From patchwork Thu Jan 19 18:53:41 2023
X-Patchwork-Submitter: "Mahapatra, Amit Kumar"
X-Patchwork-Id: 45927
From: Amit Kumar Mahapatra
Subject: [PATCH v2 12/13] mtd: spi-nor: Add parallel memories support in spi-nor
Date: Fri, 20 Jan 2023 00:23:41 +0530
Message-ID: <20230119185342.2093323-13-amit.kumar-mahapatra@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230119185342.2093323-1-amit.kumar-mahapatra@amd.com>
References: <20230119185342.2093323-1-amit.kumar-mahapatra@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The current implementation assumes that a maximum of two flashes are
connected in parallel mode. The QSPI controller splits the data evenly
between the two flashes, so both flashes connected in parallel mode must be
identical. During each operation SPI-NOR sets bit 0 for CS0 and bit 1 for
CS1 in nor->spimem->spi->cs_index_mask; the QSPI driver then
asserts/de-asserts CS0 and CS1 accordingly. Write operations in parallel
mode are performed in chunks of page size * 2, since each write programs
both flashes. Because the address space is doubled, each operation is
performed at flash offset addr/2, where addr is the address specified by
the user.
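The mapping described above can be summarised with a minimal,
self-contained userspace C sketch (hypothetical names such as
struct parallel_op and map_write(); this is only an illustration of the
scheme, not code from the patch):

/*
 * Illustrative model of the parallel-mode mapping: two identical flashes
 * appear to MTD as one device of twice the size; every byte at user
 * address 'addr' lives at offset addr/2 in each flash, writes program one
 * page per flash at a time, and both chip selects are asserted together.
 */
#include <stdio.h>
#include <stdint.h>

#define CS0_BIT (1u << 0)
#define CS1_BIT (1u << 1)

struct parallel_op {
	uint64_t flash_offset;	/* offset programmed into each flash */
	uint32_t chunk_size;	/* bytes transferred per program operation */
	uint8_t  cs_index_mask;	/* chip selects asserted for the operation */
};

static struct parallel_op map_write(uint64_t addr, uint32_t page_size)
{
	struct parallel_op op;

	op.flash_offset = addr / 2;		/* address space is doubled */
	op.chunk_size = page_size * 2;		/* one page per flash, written together */
	op.cs_index_mask = CS0_BIT | CS1_BIT;	/* assert CS0 and CS1 simultaneously */
	return op;
}

int main(void)
{
	/* Hypothetical example: 256-byte pages, user write at address 0x1000 */
	struct parallel_op op = map_write(0x1000, 256);

	printf("flash offset 0x%llx, chunk %u bytes, cs mask 0x%x\n",
	       (unsigned long long)op.flash_offset, (unsigned)op.chunk_size,
	       op.cs_index_mask);
	return 0;
}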
Signed-off-by: Amit Kumar Mahapatra --- drivers/mtd/spi-nor/core.c | 514 +++++++++++++++++++++++--------- drivers/mtd/spi-nor/core.h | 4 + drivers/mtd/spi-nor/micron-st.c | 5 + 3 files changed, 384 insertions(+), 139 deletions(-) diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index bb7326dc8b70..367cbb36ef69 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -464,17 +464,29 @@ int spi_nor_read_sr(struct spi_nor *nor, u8 *sr) op.data.nbytes = 2; } + if (nor->flags & SNOR_F_HAS_PARALLEL) + op.data.nbytes = 2; + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); ret = spi_mem_exec_op(nor->spimem, &op); } else { - ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_RDSR, sr, - 1); + if (nor->flags & SNOR_F_HAS_PARALLEL) + ret = spi_nor_controller_ops_read_reg(nor, + SPINOR_OP_RDSR, + sr, 2); + else + ret = spi_nor_controller_ops_read_reg(nor, + SPINOR_OP_RDSR, + sr, 1); } if (ret) dev_dbg(nor->dev, "error %d reading SR\n", ret); + if (nor->flags & SNOR_F_HAS_PARALLEL) + sr[0] |= sr[1]; + return ret; } @@ -1466,12 +1478,122 @@ static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr) if (ret) return ret; - /* whole-chip erase? */ - if (len == mtd->size && !(nor->flags & SNOR_F_NO_OP_CHIP_ERASE)) { - unsigned long timeout; + if (!(nor->flags & SNOR_F_HAS_PARALLEL)) { + /* whole-chip erase? */ + if (len == mtd->size && !(nor->flags & SNOR_F_NO_OP_CHIP_ERASE)) { + unsigned long timeout; + + while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && params) { + nor->spimem->spi->cs_index_mask = 1 << cur_cs_num; + ret = spi_nor_write_enable(nor); + if (ret) + goto erase_err; + + ret = spi_nor_erase_chip(nor); + if (ret) + goto erase_err; + + /* + * Scale the timeout linearly with the size of the flash, with + * a minimum calibrated to an old 2MB flash. We could try to + * pull these from CFI/SFDP, but these values should be good + * enough for now. + */ + timeout = max(CHIP_ERASE_2MB_READY_WAIT_JIFFIES, + CHIP_ERASE_2MB_READY_WAIT_JIFFIES * + (unsigned long)(params->size / + SZ_2M)); + ret = spi_nor_wait_till_ready_with_timeout(nor, timeout); + if (ret) + goto erase_err; + + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + } + + /* REVISIT in some cases we could speed up erasing large regions + * by using SPINOR_OP_SE instead of SPINOR_OP_BE_4K. We may have set up + * to use "small sector erase", but that's not always optimal. + */ + + /* "sector"-at-a-time erase */ + } else if (spi_nor_has_uniform_erase(nor)) { + /* Determine the flash from which the operation need to start */ + while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && + (addr > sz - 1) && params) { + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } + while (len) { + nor->spimem->spi->cs_index_mask = 1 << cur_cs_num; + ret = spi_nor_write_enable(nor); + if (ret) + goto erase_err; + + offset = addr; + if (nor->flags & SNOR_F_HAS_STACKED) { + params = spi_nor_get_params(nor, cur_cs_num); + offset -= (sz - params->size); + } + ret = spi_nor_erase_sector(nor, offset); + if (ret) + goto erase_err; + + ret = spi_nor_wait_till_ready(nor); + if (ret) + goto erase_err; + + addr += mtd->erasesize; + len -= mtd->erasesize; + + /* + * Flash cross over condition in stacked mode. 
+ */ + if ((nor->flags & SNOR_F_HAS_STACKED) && (addr > sz - 1)) { + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } + } + + /* erase multiple sectors */ + } else { + u64 erase_len = 0; + + /* Determine the flash from which the operation need to start */ + while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && + (addr > sz - 1) && params) { + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } + /* perform multi sector erase onec per Flash*/ + while (len) { + erase_len = (len > (sz - addr)) ? (sz - addr) : len; + offset = addr; + nor->spimem->spi->cs_index_mask = 1 << cur_cs_num; + if (nor->flags & SNOR_F_HAS_STACKED) { + params = spi_nor_get_params(nor, cur_cs_num); + offset -= (sz - params->size); + } + ret = spi_nor_erase_multi_sectors(nor, offset, erase_len); + if (ret) + goto erase_err; + len -= erase_len; + addr += erase_len; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } + } + } else { + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + + /* whole-chip erase? */ + if (len == mtd->size && !(nor->flags & + SNOR_F_NO_OP_CHIP_ERASE)) { + unsigned long timeout; - while (cur_cs_num < SNOR_FLASH_CNT_MAX && params) { - nor->spimem->spi->cs_index_mask = 0x01 << cur_cs_num; ret = spi_nor_write_enable(nor); if (ret) goto erase_err; @@ -1488,90 +1610,45 @@ static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr) */ timeout = max(CHIP_ERASE_2MB_READY_WAIT_JIFFIES, CHIP_ERASE_2MB_READY_WAIT_JIFFIES * - (unsigned long)(params->size / SZ_2M)); + (unsigned long)(mtd->size / SZ_2M)); ret = spi_nor_wait_till_ready_with_timeout(nor, timeout); if (ret) goto erase_err; - cur_cs_num++; - } - - /* REVISIT in some cases we could speed up erasing large regions - * by using SPINOR_OP_SE instead of SPINOR_OP_BE_4K. We may have set up - * to use "small sector erase", but that's not always optimal. - */ - /* "sector"-at-a-time erase */ - } else if (spi_nor_has_uniform_erase(nor)) { - /* Determine the flash from which the operation need to start */ - while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (addr > sz - 1) && params) { - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; - } + /* REVISIT in some cases we could speed up erasing large regions + * by using SPINOR_OP_SE instead of SPINOR_OP_BE_4K. We may have set up + * to use "small sector erase", but that's not always optimal. + */ - while (len) { - nor->spimem->spi->cs_index_mask = 0x01 << cur_cs_num; - ret = spi_nor_write_enable(nor); - if (ret) - goto erase_err; + /* "sector"-at-a-time erase */ + } else if (spi_nor_has_uniform_erase(nor)) { + while (len) { + ret = spi_nor_write_enable(nor); + if (ret) + goto erase_err; - offset = addr; - if (nor->flags & SNOR_F_HAS_STACKED) { - params = spi_nor_get_params(nor, cur_cs_num); - offset -= (sz - params->size); - } + offset = addr / 2; - ret = spi_nor_erase_sector(nor, offset); - if (ret) - goto erase_err; - - ret = spi_nor_wait_till_ready(nor); - if (ret) - goto erase_err; + ret = spi_nor_erase_sector(nor, offset); + if (ret) + goto erase_err; - addr += mtd->erasesize; - len -= mtd->erasesize; + ret = spi_nor_wait_till_ready(nor); + if (ret) + goto erase_err; - /* - * Flash cross over condition in stacked mode. 
- */ - if ((nor->flags & SNOR_F_HAS_STACKED) && (addr > sz - 1)) { - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; + addr += mtd->erasesize; + len -= mtd->erasesize; } - } - - /* erase multiple sectors */ - } else { - u64 erase_len = 0; - /* Determine the flash from which the operation need to start */ - while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (addr > sz - 1) && params) { - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; - } - /* perform multi sector erase onec per Flash*/ - while (len) { - erase_len = (len > (sz - addr)) ? (sz - addr) : len; - offset = addr; - nor->spimem->spi->cs_index_mask = 0x01 << cur_cs_num; - if (nor->flags & SNOR_F_HAS_STACKED) { - params = spi_nor_get_params(nor, cur_cs_num); - offset -= (sz - params->size); - } - ret = spi_nor_erase_multi_sectors(nor, offset, erase_len); + /* erase multiple sectors */ + } else { + offset = addr / 2; + ret = spi_nor_erase_multi_sectors(nor, offset, len); if (ret) goto erase_err; - len -= erase_len; - addr += erase_len; - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; } } - ret = spi_nor_write_disable(nor); erase_err: @@ -1771,34 +1848,59 @@ static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len, struct spi_nor_flash_parameter *params; ssize_t ret, read_len; u32 cur_cs_num = 0; - u64 sz; + u_char *readbuf; + bool is_ofst_odd = false; + u64 sz = 0; dev_dbg(nor->dev, "from 0x%08x, len %zd\n", (u32)from, len); - ret = spi_nor_lock_and_prep(nor); - if (ret) - return ret; - params = spi_nor_get_params(nor, 0); sz = params->size; - /* Determine the flash from which the operation need to start */ - while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (from > sz - 1) && params) { - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; + /* + * Cannot read from odd offset in parallel mode, so read + * len + 1 from offset + 1 and ignore offset[0] data. + */ + if ((nor->flags & SNOR_F_HAS_PARALLEL) && (from & 0x01)) { + from = (loff_t)(from - 1); + len = (size_t)(len + 1); + is_ofst_odd = true; + readbuf = kmalloc(len, GFP_KERNEL); + if (!readbuf) + return -ENOMEM; + } else { + readbuf = buf; + } + + if (!(nor->flags & SNOR_F_HAS_PARALLEL)) { + /* Determine the flash from which the operation need to start */ + while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (from > sz - 1) && params) { + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } } + ret = spi_nor_lock_and_prep(nor); + if (ret) + return ret; + while (len) { loff_t addr = from; - nor->spimem->spi->cs_index_mask = 0x01 << cur_cs_num; - read_len = (len > (sz - addr)) ? (sz - addr) : len; - params = spi_nor_get_params(nor, cur_cs_num); - addr -= (sz - params->size); + if (nor->flags & SNOR_F_HAS_PARALLEL) { + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + read_len = len; + addr /= 2; + } else { + nor->spimem->spi->cs_index_mask = 1 << cur_cs_num; + read_len = (len > (sz - addr)) ? 
(sz - addr) : len; + params = spi_nor_get_params(nor, cur_cs_num); + addr -= (sz - params->size); + } addr = spi_nor_convert_addr(nor, addr); - ret = spi_nor_read_data(nor, addr, len, buf); + ret = spi_nor_read_data(nor, addr, read_len, readbuf); if (ret == 0) { /* We shouldn't see 0-length reads */ ret = -EIO; @@ -1808,8 +1910,20 @@ static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len, goto read_err; WARN_ON(ret > read_len); - *retlen += ret; + if (is_ofst_odd) { + /* + * Cannot read from odd offset in parallel mode. + * So read len + 1 from offset + 1 from the flash + * and copy len data from readbuf[1]. + */ + memcpy(buf, (readbuf + 1), (len - 1)); + *retlen += (ret - 1); + } else { + *retlen += ret; + } buf += ret; + if (!is_ofst_odd) + readbuf += ret; from += ret; len -= ret; @@ -1827,6 +1941,9 @@ static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len, ret = 0; read_err: + if (is_ofst_odd) + kfree(readbuf); + spi_nor_unlock_and_unprep(nor); return ret; } @@ -1852,13 +1969,38 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len, page_size = params->page_size; sz = params->size; - /* Determine the flash from which the operation need to start */ - while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (to > sz - 1) && params) { - cur_cs_num++; - params = spi_nor_get_params(nor, cur_cs_num); - sz += params->size; - } + if (nor->flags & SNOR_F_HAS_PARALLEL) { + /* + * Cannot write to odd offset in parallel mode, + * so write 2 byte first. + */ + if (to & 0x01) { + u8 two[2] = {0xff, buf[0]}; + size_t written_len; + + ret = spi_nor_write(mtd, to & ~1, 2, &written_len, two); + if (ret < 0) + return ret; + *retlen += 1; /* We've written only one actual byte */ + ++buf; + --len; + ++to; + } + /* + * Write operation are performed in page size chunks and in + * parallel memories both the flashes are written simultaneously, + * hence doubled the page_size. + */ + page_size <<= 1; + } else { + /* Determine the flash from which the operation need to start */ + while ((cur_cs_num < SNOR_FLASH_CNT_MAX) && (to > sz - 1) && params) { + cur_cs_num++; + params = spi_nor_get_params(nor, cur_cs_num); + sz += params->size; + } + } ret = spi_nor_lock_and_prep(nor); if (ret) return ret; @@ -1882,9 +2024,14 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len, /* the size of data remaining on the first page */ page_remain = min_t(size_t, page_size - page_offset, len - i); - nor->spimem->spi->cs_index_mask = 0x01 << cur_cs_num; - params = spi_nor_get_params(nor, cur_cs_num); - addr -= (sz - params->size); + if (nor->flags & SNOR_F_HAS_PARALLEL) { + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + addr /= 2; + } else { + nor->spimem->spi->cs_index_mask = 1 << cur_cs_num; + params = spi_nor_get_params(nor, cur_cs_num); + addr -= (sz - params->size); + } addr = spi_nor_convert_addr(nor, addr); @@ -2323,7 +2470,15 @@ static int spi_nor_select_erase(struct spi_nor *nor) if (!erase) return -EINVAL; nor->erase_opcode = erase->opcode; - mtd->erasesize = erase->size; + /* + * In parallel-memories the erase operation is + * performed on both the flashes simultaneously + * so, double the erasesize. 
+ */ + if (nor->flags & SNOR_F_HAS_PARALLEL) + mtd->erasesize = erase->size * 2; + else + mtd->erasesize = erase->size; return 0; } @@ -2341,7 +2496,15 @@ static int spi_nor_select_erase(struct spi_nor *nor) if (!erase) return -EINVAL; - mtd->erasesize = erase->size; + /* + * In parallel-memories the erase operation is + * performed on both the flashes simultaneously + * so, double the erasesize. + */ + if (nor->flags & SNOR_F_HAS_PARALLEL) + mtd->erasesize = erase->size * 2; + else + mtd->erasesize = erase->size; return 0; } @@ -2659,7 +2822,22 @@ static void spi_nor_late_init_params(struct spi_nor *nor) nor->flags |= SNOR_F_HAS_STACKED; } } - if (nor->flags & SNOR_F_HAS_STACKED) { + i = 0; + idx = 0; + while (i < SNOR_FLASH_CNT_MAX) { + rc = of_property_read_u64_index(np, "parallel-memories", idx, &flash_size[i]); + if (rc == -EINVAL) { + break; + } else if (rc == -EOVERFLOW) { + idx++; + } else { + idx++; + i++; + if (!(nor->flags & SNOR_F_HAS_PARALLEL)) + nor->flags |= SNOR_F_HAS_PARALLEL; + } + } + if (nor->flags & (SNOR_F_HAS_STACKED | SNOR_F_HAS_PARALLEL)) { for (idx = 1; idx < SNOR_FLASH_CNT_MAX; idx++) { params = spi_nor_get_params(nor, idx); params = devm_kzalloc(nor->dev, sizeof(*params), GFP_KERNEL); @@ -2881,24 +3059,42 @@ static int spi_nor_quad_enable(struct spi_nor *nor) struct spi_nor_flash_parameter *params; int err, idx; - for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { - params = spi_nor_get_params(nor, idx); - if (params) { - if (!params->quad_enable) - return 0; + if (nor->flags & SNOR_F_HAS_PARALLEL) { + params = spi_nor_get_params(nor, 0); + if (!params->quad_enable) + return 0; - if (!(spi_nor_get_protocol_width(nor->read_proto) == 4 || - spi_nor_get_protocol_width(nor->write_proto) == 4)) - return 0; - /* - * Set the appropriate CS index before - * issuing the command. - */ - nor->spimem->spi->cs_index_mask = 0x01 << idx; + if (!(spi_nor_get_protocol_width(nor->read_proto) == 4 || + spi_nor_get_protocol_width(nor->write_proto) == 4)) + return 0; + /* + * In parallel mode both chip selects i.e., CS0 & + * CS1 need to be asserted simulatneously. + */ + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + err = params->quad_enable(nor); + if (err) + return err; + } else { + for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { + params = spi_nor_get_params(nor, idx); + if (params) { + if (!params->quad_enable) + return 0; - err = params->quad_enable(nor); - if (err) - return err; + if (!(spi_nor_get_protocol_width(nor->read_proto) == 4 || + spi_nor_get_protocol_width(nor->write_proto) == 4)) + return 0; + /* + * Set the appropriate CS index before + * issuing the command. + */ + nor->spimem->spi->cs_index_mask = 1 << idx; + + err = params->quad_enable(nor); + if (err) + return err; + } } } return err; @@ -2948,17 +3144,29 @@ static int spi_nor_init(struct spi_nor *nor) */ WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET, "enabling reset hack; may not recover from unexpected reboots\n"); - for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { - params = spi_nor_get_params(nor, idx); - if (params) { - /* - * Select the appropriate CS index before - * issuing the command. - */ - nor->spimem->spi->cs_index_mask = 0x01 << idx; - err = params->set_4byte_addr_mode(nor, true); - if (err && err != -ENOTSUPP) - return err; + if (nor->flags & SNOR_F_HAS_PARALLEL) { + /* + * In parallel mode both chip selects i.e., CS0 & + * CS1 need to be asserted simulatneously. 
+ */ + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + params = spi_nor_get_params(nor, 0); + err = params->set_4byte_addr_mode(nor, true); + if (err && err != -ENOTSUPP) + return err; + } else { + for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { + params = spi_nor_get_params(nor, idx); + if (params) { + /* + * Select the appropriate CS index before + * issuing the command. + */ + nor->spimem->spi->cs_index_mask = 1 << idx; + err = params->set_4byte_addr_mode(nor, true); + if (err && err != -ENOTSUPP) + return err; + } } } } @@ -3081,20 +3289,39 @@ void spi_nor_restore(struct spi_nor *nor) /* restore the addressing mode */ if (nor->addr_nbytes == 4 && !(nor->flags & SNOR_F_4B_OPCODES) && nor->flags & SNOR_F_BROKEN_RESET) { - for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { - params = spi_nor_get_params(nor, idx); - if (params) { + if (nor->flags & SNOR_F_HAS_PARALLEL) { + /* + * In parallel mode both chip selects i.e., CS0 & + * CS1 need to be asserted simulatneously. + */ + nor->spimem->spi->cs_index_mask = SPI_NOR_ENABLE_MULTI_CS; + params = spi_nor_get_params(nor, 0); + ret = params->set_4byte_addr_mode(nor, false); + if (ret) + /* + * Do not stop the execution in the hope that the flash + * will default to the 3-byte address mode after the + * software reset. + */ + dev_err(nor->dev, + "Failed to exit 4-byte address mode, err = %d\n", + ret); + } else { + for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { + params = spi_nor_get_params(nor, idx); + if (!params) + break; /* * Select the appropriate CS index before * issuing the command. */ - nor->spimem->spi->cs_index_mask = 0x01 << idx; + nor->spimem->spi->cs_index_mask = 1 << idx; ret = params->set_4byte_addr_mode(nor, false); if (ret) /* - * Do not stop the execution in the hope that the flash - * will default to the 3-byte address mode after the - * software reset. + * Do not stop the execution in the hope that the + * flash will default to the 3-byte address mode + * after the software reset. */ dev_err(nor->dev, "Failed to exit 4-byte address mode, err = %d\n", @@ -3184,7 +3411,16 @@ static void spi_nor_set_mtd_info(struct spi_nor *nor) else mtd->_erase = spi_nor_erase; mtd->writesize = params->writesize; - mtd->writebufsize = params->page_size; + /* + * In parallel-memories the write operation is + * performed on both the flashes simultaneously + * one page per flash, so double the writebufsize. + */ + if (nor->flags & SNOR_F_HAS_PARALLEL) + mtd->writebufsize = params->page_size << 1; + else + mtd->writebufsize = params->page_size; + for (idx = 0; idx < SNOR_FLASH_CNT_MAX; idx++) { params = spi_nor_get_params(nor, idx); if (params) diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index e94107cc465e..4aedc9fbef32 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -14,6 +14,9 @@ /* In single configuration enable CS0 */ #define SPI_NOR_ENABLE_CS0 BIT(0) +/* In parallel configuration enable multiple CS */ +#define SPI_NOR_ENABLE_MULTI_CS (BIT(0) | BIT(1)) + /* Standard SPI NOR flash operations. 
*/ #define SPI_NOR_READID_OP(naddr, ndummy, buf, len) \ SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 0), \ @@ -134,6 +137,7 @@ enum spi_nor_option_flags { SNOR_F_SOFT_RESET = BIT(12), SNOR_F_SWP_IS_VOLATILE = BIT(13), SNOR_F_HAS_STACKED = BIT(14), + SNOR_F_HAS_PARALLEL = BIT(15), }; struct spi_nor_read_command { diff --git a/drivers/mtd/spi-nor/micron-st.c b/drivers/mtd/spi-nor/micron-st.c index b93e16094b6c..9be39f237dfc 100644 --- a/drivers/mtd/spi-nor/micron-st.c +++ b/drivers/mtd/spi-nor/micron-st.c @@ -357,6 +357,9 @@ static int micron_st_nor_read_fsr(struct spi_nor *nor, u8 *fsr) op.data.nbytes = 2; } + if (nor->flags & SNOR_F_HAS_PARALLEL) + op.data.nbytes = 2; + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); ret = spi_mem_exec_op(nor->spimem, &op); @@ -368,6 +371,8 @@ static int micron_st_nor_read_fsr(struct spi_nor *nor, u8 *fsr) if (ret) dev_dbg(nor->dev, "error %d reading FSR\n", ret); + if (nor->flags & SNOR_F_HAS_PARALLEL) + fsr[0] &= fsr[1]; return ret; }
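A note on the last two hunks: spi_nor_read_sr() ORs the two per-flash
values (sr[0] |= sr[1]) while micron_st_nor_read_fsr() ANDs them
(fsr[0] &= fsr[1]), presumably because the status register's WIP bit is 1
while the device is busy, whereas the flag status register's ready bit is
1 when the device is ready. A minimal userspace sketch of that combination
logic (hypothetical helper names, not code from the patch):

/*
 * Illustrative sketch: combining per-flash status in parallel mode.
 * Bit definitions follow the usual SPI NOR conventions: SR bit 0 (WIP)
 * is 1 while a write is in progress; Micron FSR bit 7 is 1 when the
 * program/erase controller is ready.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SR_WIP    (1u << 0)	/* status register: 1 = busy */
#define FSR_READY (1u << 7)	/* flag status register: 1 = ready */

/* Busy if either flash is busy, hence the OR of the two status registers. */
static bool busy_from_sr(uint8_t sr0, uint8_t sr1)
{
	return ((sr0 | sr1) & SR_WIP) != 0;
}

/* Ready only when both flashes are ready, hence the AND of the two FSRs. */
static bool ready_from_fsr(uint8_t fsr0, uint8_t fsr1)
{
	return ((fsr0 & fsr1) & FSR_READY) != 0;
}

int main(void)
{
	/* Flash 0 still busy, flash 1 idle/ready. */
	printf("busy=%d ready=%d\n",
	       busy_from_sr(SR_WIP, 0), ready_from_fsr(0, FSR_READY));
	return 0;
}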