Message ID | 20240215211401.1201004-1-m.felsch@pengutronix.de |
---|---|
State | New |
Headers |
From: Marco Felsch <m.felsch@pengutronix.de>
To: srinivas.kandagatla@linaro.org, gregkh@linuxfoundation.org, miquel.raynal@bootlin.com, michael@walle.cc, rafal@milecki.pl
Cc: linux-kernel@vger.kernel.org, kernel@pengutronix.de
Subject: [RFC PATCH] nvmem: core: add sysfs cell write support
Date: Thu, 15 Feb 2024 22:14:01 +0100
Message-Id: <20240215211401.1201004-1-m.felsch@pengutronix.de>
X-Mailer: git-send-email 2.39.2
List-Id: <linux-kernel.vger.kernel.org> |
Series | [RFC] nvmem: core: add sysfs cell write support |
Commit Message
Marco Felsch
Feb. 15, 2024, 9:14 p.m. UTC
Add sysfs cell write support to make it possible to write to the exposed
cells from sysfs as well, e.g. for device provisioning.
Signed-off-by: Marco Felsch <m.felsch@pengutronix.de>
---
Hi,
the purpose of this patch is to make the NVMEM cells exposed via the
sysfs testing ABI writable, to allow changes during device life-time.
Regards,
Marco
drivers/nvmem/core.c | 40 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 39 insertions(+), 1 deletion(-)
Comments
Hi,

On Thu Feb 15, 2024 at 10:14 PM CET, Marco Felsch wrote:
> @@ -432,6 +466,7 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
>  	struct bin_attribute **cells_attrs, *attrs;
>  	struct nvmem_cell_entry *entry;
>  	unsigned int ncells = 0, i = 0;
> +	umode_t mode;
>  	int ret = 0;
>
>  	mutex_lock(&nvmem_mutex);
> @@ -456,15 +491,18 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
>  		goto unlock_mutex;
>  	}
>
> +	mode = nvmem_bin_attr_get_umode(nvmem);
> +
>  	/* Initialize each attribute to take the name and size of the cell */
>  	list_for_each_entry(entry, &nvmem->cells, node) {
>  		sysfs_bin_attr_init(&attrs[i]);
>  		attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
>  						    "%s@%x", entry->name,
>  						    entry->offset);
> -		attrs[i].attr.mode = 0444;

Cells are not writable if there is a read post process hook, see
__nvmem_cell_entry_write().

	if (entry->read_post_processing)
		mode &= ~0222;

-michael
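[Editor's note] One way the suggestion could be folded into the patch is a small per-cell mode helper, sketched below. nvmem_cell_attr_get_umode() is a made-up name, and the hook is assumed to be the entry's read_post_process callback as found in current drivers/nvmem/core.c; nvmem_bin_attr_get_umode() is the existing device-wide helper already used in the patch.

/*
 * Sketch only: derive a per-cell mode from the device-wide mode and drop
 * write permission when the cell has a read post process hook, since such
 * cells cannot be written (see __nvmem_cell_entry_write()).
 */
static umode_t nvmem_cell_attr_get_umode(struct nvmem_device *nvmem,
					 struct nvmem_cell_entry *entry)
{
	umode_t mode = nvmem_bin_attr_get_umode(nvmem);

	if (entry->read_post_process)
		mode &= ~0222;

	return mode;
}

The loop in nvmem_populate_sysfs_cells() would then set attrs[i].attr.mode = nvmem_cell_attr_get_umode(nvmem, entry); instead of using a single mode for all cells.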
Hi Michael,

On 24-02-16, Michael Walle wrote:
> On Thu Feb 15, 2024 at 10:14 PM CET, Marco Felsch wrote:
> > -		attrs[i].attr.mode = 0444;
>
> Cells are not writable if there is a read post process hook, see
> __nvmem_cell_entry_write().
>
> 	if (entry->read_post_processing)
> 		mode &= ~0222;

good point, thanks for the hint :) I will add this and send a non-rfc
version if write-support is something you would like to have.

Regards,
  Marco
Hi Marco,

m.felsch@pengutronix.de wrote on Fri, 16 Feb 2024 11:07:50 +0100:
> On 24-02-16, Michael Walle wrote:
> > Cells are not writable if there is a read post process hook, see
> > __nvmem_cell_entry_write().
> >
> > 	if (entry->read_post_processing)
> > 		mode &= ~0222;
>
> good point, thanks for the hint :) I will add this and send a non-rfc
> version if write-support is something you would like to have.

I like the idea but, what about mtd devices (and soon maybe UBI
devices)? This may only work on EEPROM-like devices I guess, where each
area is fully independent and where no erasure is actually expected.

Thanks,
Miquèl
On 24-02-19, Miquel Raynal wrote:
> I like the idea but, what about mtd devices (and soon maybe UBI
> devices)? This may only work on EEPROM-like devices I guess, where each
> area is fully independent and where no erasure is actually expected.

For MTD I would say that you need to align the cells correctly and the
cell-write should handle the erase/write cycle properly. E.g. an
SPI-NOR needs the cells aligned to the erase-block size, or the
nvmem-cell-write needs to read-copy-update the cells if they are not
erase-block aligned.

Regarding UBI(FS) I'm not sure if this is required at all since you
have a filesystem. IMHO nvmem-cells are very low-level and are not made
for filesystem-backed backends.

That being said: I have no problem if we provide write support for
EEPROMs only and adapt it later on to cover spi-nor/nand devices as
well.

Regards,
  Marco
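[Editor's note] To make the read-copy-update idea above concrete, a rough sketch of what a cell write on an erase-block based device would have to do. struct flash_dev and the backend_*() helpers are placeholders for whatever the underlying driver provides, not an existing kernel API; a power-of-two erase size is assumed.

/*
 * Illustrative read-copy-update of a single NVMEM cell that sits inside a
 * larger erase block.
 */
static int cell_rcu_write(struct flash_dev *dev, loff_t cell_off,
			  const void *val, size_t len, size_t erase_size)
{
	loff_t block_off = cell_off & ~(loff_t)(erase_size - 1);
	u8 *shadow;
	int ret;

	shadow = kmalloc(erase_size, GFP_KERNEL);
	if (!shadow)
		return -ENOMEM;

	/* Read: copy the whole erase block into RAM. */
	ret = backend_read(dev, block_off, shadow, erase_size);
	if (ret)
		goto out;

	/* Copy/update: patch only the bytes belonging to the cell. */
	memcpy(shadow + (cell_off - block_off), val, len);

	/* Erase the block, then write the patched image back. */
	ret = backend_erase(dev, block_off, erase_size);
	if (!ret)
		ret = backend_write(dev, block_off, shadow, erase_size);
out:
	kfree(shadow);
	return ret;
}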
On Mon Feb 19, 2024 at 12:53 PM CET, Marco Felsch wrote:
> For MTD I would say that you need to align the cells correctly and the
> cell-write should handle the erase/write cycle properly. E.g. an
> SPI-NOR needs the cells aligned to the erase-block size, or the
> nvmem-cell-write needs to read-copy-update the cells if they are not
> erase-block aligned.
>
> Regarding UBI(FS) I'm not sure if this is required at all since you
> have a filesystem. IMHO nvmem-cells are very low-level and are not made
> for filesystem-backed backends.
>
> That being said: I have no problem if we provide write support for
> EEPROMs only and adapt it later on to cover spi-nor/nand devices as
> well.

Agreed. Honestly, I don't know how much sense this makes for MTD
devices. First, the operation itself seems really dangerous, as you'll
have to delete the whole sector. Second, during initial provisioning, I
don't think it will make much sense to use the sysfs cells because you
cannot combine multiple writes into one. You'll always end up with
unnecessary erases. What's your use case here?

Just my two cents..

-michael
Hi,

michael@walle.cc wrote on Mon, 19 Feb 2024 14:26:16 +0100:
> On Mon Feb 19, 2024 at 12:53 PM CET, Marco Felsch wrote:
> > Regarding UBI(FS) I'm not sure if this is required at all since you
> > have a filesystem. IMHO nvmem-cells are very low-level and are not
> > made for filesystem-backed backends.

I'm really talking about UBI, not UBIFS. UBI is just like MTD but
handles wear leveling. There is a pending series for enabling nvmem
cells on top of UBI.

> > That being said: I have no problem if we provide write support for
> > EEPROMs only and adapt it later on to cover spi-nor/nand devices as
> > well.
>
> Agreed. Honestly, I don't know how much sense this makes for MTD
> devices. First, the operation itself seems really dangerous, as you'll
> have to delete the whole sector. Second, during initial provisioning, I
> don't think it will make much sense to use the sysfs cells because you
> cannot combine multiple writes into one. You'll always end up with
> unnecessary erases.

One cell per erase block would be an immense waste. Read-copy-update
would probably work but would as well be very sub-optimal. I guess we
could live with it, but as for now there has not been any real request
for it, I'd also advise to keep this feature out of the mtd world in
general.

Thanks,
Miquèl
Hi Miquel, Michael,

On 24-02-20, Miquel Raynal wrote:
> I'm really talking about UBI, not UBIFS. UBI is just like MTD but
> handles wear leveling. There is a pending series for enabling nvmem
> cells on top of UBI.

Cells on top of a wear-leveling device? Interesting, the cell API is
very low-level, which means the specified cell will be at the exact
same place on the hardware device as specified in the dts. How do you
know that with wear leveling underneath the cell API?

> > Agreed. Honestly, I don't know how much sense this makes for MTD
> > devices. First, the operation itself seems really dangerous, as
> > you'll have to delete the whole sector. Second, during initial
> > provisioning, I don't think it will make much sense to use the sysfs
> > cells because you cannot combine multiple writes into one. You'll
> > always end up with unnecessary erases.
>
> One cell per erase block would be an immense waste.

Agree.

> Read-copy-update would probably work but would as well be very
> sub-optimal. I guess we could live with it, but as for now there has
> not been any real request for it, I'd also advise to keep this feature
> out of the mtd world in general.

SPI-NORs are very typical for storing production data as well, but as I
said this is another story. I'm fine with limiting it to EEPROMs since
this is my use-case :)

Regards,
  Marco
Hi Marco,

> > I'm really talking about UBI, not UBIFS. UBI is just like MTD but
> > handles wear leveling. There is a pending series for enabling nvmem
> > cells on top of UBI.
>
> Cells on top of a wear-leveling device? Interesting, the cell API is
> very low-level, which means the specified cell will be at the exact
> same place on the hardware device as specified in the dts. How do you
> know that with wear leveling underneath the cell API?

https://lore.kernel.org/lkml/cover.1702952891.git.daniel@makrotopia.org/

I haven't tested it though.

Thanks,
Miquèl
On Tue Feb 20, 2024 at 10:50 AM CET, Marco Felsch wrote:
> > Read-copy-update would probably work but would as well be very
> > sub-optimal. I guess we could live with it, but as for now there has
> > not been any real request for it, I'd also advise to keep this
> > feature out of the mtd world in general.
>
> SPI-NORs are very typical for storing production data as well, but as
> I said this is another story. I'm fine with limiting it to EEPROMs
> since this is my use-case :)

Right, that is just what we are doing on our boards. But we do that in
one go in our production environment, like just writing to the mtd
partition or the OTP region. The nvmem cells are then just for
connecting the devices to this information (like the nvmem-cells
property of an ethernet device). Also usually, there is more to it,
like removing write protection of said flash (sometimes in a
proprietary way).

-michael
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 980123fb4dde..5a497188cfea 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -336,6 +336,40 @@ static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
 	return read_len;
 }
 
+static ssize_t nvmem_cell_attr_write(struct file *filp, struct kobject *kobj,
+				     struct bin_attribute *attr, char *buf,
+				     loff_t pos, size_t count)
+{
+	struct nvmem_cell_entry *entry;
+	struct nvmem_cell *cell;
+	int ret;
+
+	entry = attr->private;
+
+	if (!entry->nvmem->reg_write)
+		return -EPERM;
+
+	if (pos >= entry->bytes)
+		return -EFBIG;
+
+	if (pos + count > entry->bytes)
+		count = entry->bytes - pos;
+
+	cell = nvmem_create_cell(entry, entry->name, 0);
+	if (IS_ERR(cell))
+		return PTR_ERR(cell);
+
+	if (!cell)
+		return -EINVAL;
+
+	ret = nvmem_cell_write(cell, buf, count);
+
+	kfree_const(cell->id);
+	kfree(cell);
+
+	return ret;
+}
+
 /* default read/write permissions */
 static struct bin_attribute bin_attr_rw_nvmem = {
 	.attr = {
@@ -432,6 +466,7 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
 	struct bin_attribute **cells_attrs, *attrs;
 	struct nvmem_cell_entry *entry;
 	unsigned int ncells = 0, i = 0;
+	umode_t mode;
 	int ret = 0;
 
 	mutex_lock(&nvmem_mutex);
@@ -456,15 +491,18 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
 		goto unlock_mutex;
 	}
 
+	mode = nvmem_bin_attr_get_umode(nvmem);
+
 	/* Initialize each attribute to take the name and size of the cell */
 	list_for_each_entry(entry, &nvmem->cells, node) {
 		sysfs_bin_attr_init(&attrs[i]);
 		attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
 						    "%s@%x", entry->name,
 						    entry->offset);
-		attrs[i].attr.mode = 0444;
+		attrs[i].attr.mode = mode;
 		attrs[i].size = entry->bytes;
 		attrs[i].read = &nvmem_cell_attr_read;
+		attrs[i].write = &nvmem_cell_attr_write;
 		attrs[i].private = entry;
 		if (!attrs[i].attr.name) {
 			ret = -ENOMEM;
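[Editor's note] As a usage illustration, a small userspace program that writes a cell through the sysfs file created by this patch. The provider name "eeprom0" and the cell "mac-address@0" are made-up examples; the actual path depends on the NVMEM device and its cells (/sys/bus/nvmem/devices/<provider>/cells/<name>@<offset>).

/* Illustration only: write 6 bytes into an exposed NVMEM cell. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char path[] = "/sys/bus/nvmem/devices/eeprom0/cells/mac-address@0";
	const unsigned char mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* A single write() ends up in nvmem_cell_attr_write() above. */
	if (write(fd, mac, sizeof(mac)) != (ssize_t)sizeof(mac)) {
		perror("write");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}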