Message ID | 20230717075147.43326-4-miquel.raynal@bootlin.com |
---|---|
State | New |
Headers |
Series | NVMEM cells in sysfs |
Commit Message
Miquel Raynal
July 17, 2023, 7:51 a.m. UTC
The binary content of nvmem devices is available to the user so, in the
easiest cases, finding the content of a cell is rather easy as it is
just a matter of looking at a known and fixed offset. However, nvmem
layouts have been recently introduced to cope with more advanced
situations, where the offset and size of the cells are not known in
advance or are dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of
each cell, regardless of its position/size in the underlying device.
Unfortunately, this information is not accessible to users unless they
fully re-implement the parser logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course, the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.

There is one limitation though: if a layout is built as a module but is
not properly installed in the system and loaded manually with insmod
while the nvmem device driver was built-in, the cells won't appear in
sysfs. But if done like that, the cells won't be usable by the built-in
kernel drivers anyway.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/nvmem/core.c | 101 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 101 insertions(+)
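As a quick illustration of the resulting interface: each parsed cell becomes a
read-only binary attribute under the nvmem device's "cells" directory, so
userland only needs a plain open()/read(). A minimal sketch follows; the device
name ("nvmem1") and cell name ("mac-address") are invented for the example and
depend entirely on the actual provider and layout.

/* Dump one exposed NVMEM cell as hex bytes (illustrative paths only). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[64];
	ssize_t n;
	int fd;

	fd = open("/sys/bus/nvmem/devices/nvmem1/cells/mac-address", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* The attribute is 0444 and at most the cell size (entry->bytes) long. */
	n = read(fd, buf, sizeof(buf));
	if (n < 0) {
		perror("read");
		close(fd);
		return 1;
	}

	for (ssize_t i = 0; i < n; i++)
		printf("%02x%c", buf[i], i + 1 == n ? '\n' : ' ');

	close(fd);
	return 0;
}

Built natively (e.g. gcc -o readcell readcell.c), this prints the raw cell
content as hexadecimal bytes.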
Comments
Hi,

> There is one limitation though: if a layout is built as a module but is
> not properly installed in the system and loaded manually with insmod
> while the nvmem device driver was built-in, the cells won't appear in
> sysfs. But if done like that, the cells won't be usable by the built-in
> kernel drivers anyway.

What is the difference between manual loading with insmod and automatic
module loading? Or is the limitation, layout as M and device driver as Y
doesn't work?

-michael
On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > The binary content of nvmem devices is available to the user so in the > easiest cases, finding the content of a cell is rather easy as it is > just a matter of looking at a known and fixed offset. However, nvmem > layouts have been recently introduced to cope with more advanced > situations, where the offset and size of the cells is not known in > advance or is dynamic. When using layouts, more advanced parsers are > used by the kernel in order to give direct access to the content of each > cell, regardless of its position/size in the underlying > device. Unfortunately, these information are not accessible by users, > unless by fully re-implementing the parser logic in userland. > > Let's expose the cells and their content through sysfs to avoid these > situations. Of course the relevant NVMEM sysfs Kconfig option must be > enabled for this support to be available. > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > group member will be filled at runtime only when relevant and will > remain empty otherwise. In this case, as the cells attribute group will > be empty, it will not lead to any additional folder/file creation. > > Exposed cells are read-only. There is, in practice, everything in the > core to support a write path, but as I don't see any need for that, I > prefer to keep the interface simple (and probably safer). The interface > is documented as being in the "testing" state which means we can later > add a write attribute if though relevant. > > There is one limitation though: if a layout is built as a module but is > not properly installed in the system and loaded manually with insmod > while the nvmem device driver was built-in, the cells won't appear in > sysfs. But if done like that, the cells won't be usable by the built-in > kernel drivers anyway. Wait, what? That should not be an issue here, if so, then this change is not correct and should be fixed as this is NOT an issue for sysfs (otherwise the whole tree wouldn't work.) Please fix up your dependancies if this is somehow not working properly. thanks, greg k-h
Hi Greg, gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200: > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > > The binary content of nvmem devices is available to the user so in the > > easiest cases, finding the content of a cell is rather easy as it is > > just a matter of looking at a known and fixed offset. However, nvmem > > layouts have been recently introduced to cope with more advanced > > situations, where the offset and size of the cells is not known in > > advance or is dynamic. When using layouts, more advanced parsers are > > used by the kernel in order to give direct access to the content of each > > cell, regardless of its position/size in the underlying > > device. Unfortunately, these information are not accessible by users, > > unless by fully re-implementing the parser logic in userland. > > > > Let's expose the cells and their content through sysfs to avoid these > > situations. Of course the relevant NVMEM sysfs Kconfig option must be > > enabled for this support to be available. > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > > group member will be filled at runtime only when relevant and will > > remain empty otherwise. In this case, as the cells attribute group will > > be empty, it will not lead to any additional folder/file creation. > > > > Exposed cells are read-only. There is, in practice, everything in the > > core to support a write path, but as I don't see any need for that, I > > prefer to keep the interface simple (and probably safer). The interface > > is documented as being in the "testing" state which means we can later > > add a write attribute if though relevant. > > > > There is one limitation though: if a layout is built as a module but is > > not properly installed in the system and loaded manually with insmod > > while the nvmem device driver was built-in, the cells won't appear in > > sysfs. But if done like that, the cells won't be usable by the built-in > > kernel drivers anyway. > > Wait, what? That should not be an issue here, if so, then this change > is not correct and should be fixed as this is NOT an issue for sysfs > (otherwise the whole tree wouldn't work.) > > Please fix up your dependancies if this is somehow not working properly. I'm not sure I fully get your point. There is no way we can describe any dependency between a storage device driver and an nvmem layout. NVMEM is a pure software abstraction, the layout that will be chosen depends on the device tree, but if the layout has not been installed, there is no existing mechanism in the kernel to prevent it from being loaded (how do you know it's not on purpose?). Thanks, Miquèl
Hi Michael,

michael@walle.cc wrote on Mon, 17 Jul 2023 14:24:45 +0200:

> Hi,
>
> > There is one limitation though: if a layout is built as a module but is
> > not properly installed in the system and loaded manually with insmod
> > while the nvmem device driver was built-in, the cells won't appear in
> > sysfs. But if done like that, the cells won't be usable by the built-in
> > kernel drivers anyway.
>
> What is the difference between manual loading with insmod and automatic
> module loading? Or is the limitation, layout as M and device driver as Y
> doesn't work?

The nvmem core uses usermodehelper to load the relevant layout module,
but that only works if the module was installed correctly (make
modules_install).

The limitation is:
* Any storage device driver that registers an nvmem interface =y (or =m
  but loaded before the nvmem layout)
* The relevant nvmem layout =m *and* not installed with make
  modules_install

If you see a way to workaround this, let me know, but there is no way
we can enforce Kconfig dependencies between storage drivers and nvmem
layouts IMHO.

Thanks,
Miquèl
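For context, the automatic path mentioned above boils down to the core building
an OF modalias for the layout node and asking userspace to load the matching
module, which is why a module never installed by "make modules_install" cannot
be found. A rough, hypothetical sketch of that step (the helper name is made up
and the exact upstream call site differs):

/* Hypothetical helper illustrating the layout-module autoload step. */
static void nvmem_request_layout_module(const struct nvmem_device *nvmem)
{
	struct device_node *layout_np;

	/* Layouts are described in an "nvmem-layout" child node in DT. */
	layout_np = of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
	if (!layout_np)
		return;

	/*
	 * of_request_module() derives an OF modalias from the node's
	 * compatible string and goes through the usermodehelper (modprobe),
	 * so it only succeeds if the layout module was properly installed.
	 */
	of_request_module(layout_np);
	of_node_put(layout_np);
}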
On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote: > Hi Greg, > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200: > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > > > The binary content of nvmem devices is available to the user so in the > > > easiest cases, finding the content of a cell is rather easy as it is > > > just a matter of looking at a known and fixed offset. However, nvmem > > > layouts have been recently introduced to cope with more advanced > > > situations, where the offset and size of the cells is not known in > > > advance or is dynamic. When using layouts, more advanced parsers are > > > used by the kernel in order to give direct access to the content of each > > > cell, regardless of its position/size in the underlying > > > device. Unfortunately, these information are not accessible by users, > > > unless by fully re-implementing the parser logic in userland. > > > > > > Let's expose the cells and their content through sysfs to avoid these > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be > > > enabled for this support to be available. > > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > > > group member will be filled at runtime only when relevant and will > > > remain empty otherwise. In this case, as the cells attribute group will > > > be empty, it will not lead to any additional folder/file creation. > > > > > > Exposed cells are read-only. There is, in practice, everything in the > > > core to support a write path, but as I don't see any need for that, I > > > prefer to keep the interface simple (and probably safer). The interface > > > is documented as being in the "testing" state which means we can later > > > add a write attribute if though relevant. > > > > > > There is one limitation though: if a layout is built as a module but is > > > not properly installed in the system and loaded manually with insmod > > > while the nvmem device driver was built-in, the cells won't appear in > > > sysfs. But if done like that, the cells won't be usable by the built-in > > > kernel drivers anyway. > > > > Wait, what? That should not be an issue here, if so, then this change > > is not correct and should be fixed as this is NOT an issue for sysfs > > (otherwise the whole tree wouldn't work.) > > > > Please fix up your dependancies if this is somehow not working properly. > > I'm not sure I fully get your point. > > There is no way we can describe any dependency between a storage device > driver and an nvmem layout. NVMEM is a pure software abstraction, the > layout that will be chosen depends on the device tree, but if the > layout has not been installed, there is no existing mechanism in > the kernel to prevent it from being loaded (how do you know it's > not on purpose?). Once a layout has been loaded, the sysfs files should show up, right? Otherwise what does a "layout" do? (hint, I have no idea, it's an odd term to me...) thanks, greg k-h
Hi, On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > The binary content of nvmem devices is available to the user so in the > easiest cases, finding the content of a cell is rather easy as it is > just a matter of looking at a known and fixed offset. However, nvmem > layouts have been recently introduced to cope with more advanced > situations, where the offset and size of the cells is not known in > advance or is dynamic. When using layouts, more advanced parsers are > used by the kernel in order to give direct access to the content of each > cell, regardless of its position/size in the underlying > device. Unfortunately, these information are not accessible by users, > unless by fully re-implementing the parser logic in userland. > > Let's expose the cells and their content through sysfs to avoid these > situations. Of course the relevant NVMEM sysfs Kconfig option must be > enabled for this support to be available. > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > group member will be filled at runtime only when relevant and will > remain empty otherwise. In this case, as the cells attribute group will > be empty, it will not lead to any additional folder/file creation. > > Exposed cells are read-only. There is, in practice, everything in the > core to support a write path, but as I don't see any need for that, I > prefer to keep the interface simple (and probably safer). The interface > is documented as being in the "testing" state which means we can later > add a write attribute if though relevant. > > There is one limitation though: if a layout is built as a module but is > not properly installed in the system and loaded manually with insmod > while the nvmem device driver was built-in, the cells won't appear in > sysfs. But if done like that, the cells won't be usable by the built-in > kernel drivers anyway. 
> > Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> > --- > drivers/nvmem/core.c | 101 +++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 101 insertions(+) > > diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c > index 48659106a1e2..6c04a9cf6919 100644 > --- a/drivers/nvmem/core.c > +++ b/drivers/nvmem/core.c > @@ -325,6 +325,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj, > return nvmem_bin_attr_get_umode(nvmem); > } > > +static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry, > + const char *id, int index); > + > +static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj, > + struct bin_attribute *attr, char *buf, > + loff_t pos, size_t count) > +{ > + struct nvmem_cell_entry *entry; > + struct nvmem_cell *cell = NULL; > + size_t cell_sz, read_len; > + void *content; > + > + entry = attr->private; > + cell = nvmem_create_cell(entry, entry->name, 0); > + if (IS_ERR(cell)) > + return PTR_ERR(cell); > + > + if (!cell) > + return -EINVAL; > + > + content = nvmem_cell_read(cell, &cell_sz); > + if (IS_ERR(content)) { > + read_len = PTR_ERR(content); > + goto destroy_cell; > + } > + > + read_len = min_t(unsigned int, cell_sz - pos, count); > + memcpy(buf, content + pos, read_len); > + kfree(content); > + > +destroy_cell: > + kfree_const(cell->id); > + kfree(cell); > + > + return read_len; > +} > + > /* default read/write permissions */ > static struct bin_attribute bin_attr_rw_nvmem = { > .attr = { > @@ -346,8 +383,14 @@ static const struct attribute_group nvmem_bin_group = { > .is_bin_visible = nvmem_bin_attr_is_visible, > }; > > +/* Cell attributes will be dynamically allocated */ > +static struct attribute_group nvmem_cells_group = { > + .name = "cells", > +}; > + > static const struct attribute_group *nvmem_dev_groups[] = { > &nvmem_bin_group, > + &nvmem_cells_group, > NULL, > }; > > @@ -406,6 +449,58 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem, > device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom); > } > > +static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem) > +{ > + struct bin_attribute **cells_attrs, *attrs; > + struct nvmem_cell_entry *entry; > + unsigned int ncells = 0, i = 0; > + int ret = 0; > + > + mutex_lock(&nvmem_mutex); > + > + if (list_empty(&nvmem->cells)) > + goto unlock_mutex; > + > + /* Allocate an array of attributes with a sentinel */ > + ncells = list_count_nodes(&nvmem->cells); > + cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1, > + sizeof(struct bin_attribute *), GFP_KERNEL); > + if (!cells_attrs) { > + ret = -ENOMEM; > + goto unlock_mutex; > + } > + > + attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL); > + if (!attrs) { > + ret = -ENOMEM; > + goto unlock_mutex; > + } > + > + /* Initialize each attribute to take the name and size of the cell */ > + list_for_each_entry(entry, &nvmem->cells, node) { > + sysfs_bin_attr_init(&attrs[i]); > + attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL); > + attrs[i].attr.mode = 0444; > + attrs[i].size = entry->bytes; > + attrs[i].read = &nvmem_cell_attr_read; > + attrs[i].private = entry; > + if (!attrs[i].attr.name) { > + ret = -ENOMEM; > + goto unlock_mutex; > + } > + > + cells_attrs[i] = &attrs[i]; > + i++; > + } > + > + nvmem_cells_group.bin_attrs = cells_attrs; > + > +unlock_mutex: > + mutex_unlock(&nvmem_mutex); > + > + return ret; > +} > + > #else /* CONFIG_NVMEM_SYSFS */ > > static 
int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
> @@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
>  	if (rval)
>  		goto err_remove_cells;
>
> +#ifdef CONFIG_NVMEM_SYSFS
> +	rval = nvmem_populate_sysfs_cells(nvmem);
> +	if (rval)
> +		goto err_remove_cells;

This breaks nvmem / efuse devices with multiple cells that share the
same name. Something like this in DT:

	efuse: efuse@11f10000 {
		compatible = "mediatek,mt8183-efuse",
			     "mediatek,efuse";
		reg = <0 0x11f10000 0 0x1000>;
		#address-cells = <1>;
		#size-cells = <1>;

		thermal_calibration: calib@180 {
			reg = <0x180 0xc>;
		};

		mipi_tx_calibration: calib@190 {
			reg = <0x190 0xc>;
		};

		svs_calibration: calib@580 {
			reg = <0x580 0x64>;
		};
	};

creates three cells, all named "calib", and sysfs will complain:

	sysfs: cannot create duplicate filename '/devices/platform/soc/11f10000.efuse/nvmem1/cells/calib'
	mediatek,efuse: probe of 11f10000.efuse failed with error -17

This causes the MT8183-based Chromebooks to lose display capability,
among other things.

The problem lies in the nvmem DT parsing code, where the cell name is
derived from the node name, without including the address portion.
However I'm not sure we can change that, since it could be considered
ABI?

ChenYu

> +#endif
> +
>  	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
>
>  	rval = device_add(&nvmem->dev);
> --
> 2.34.1
>
Hi Greg, gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 18:59:52 +0200: > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote: > > Hi Greg, > > > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200: > > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > > > > The binary content of nvmem devices is available to the user so in the > > > > easiest cases, finding the content of a cell is rather easy as it is > > > > just a matter of looking at a known and fixed offset. However, nvmem > > > > layouts have been recently introduced to cope with more advanced > > > > situations, where the offset and size of the cells is not known in > > > > advance or is dynamic. When using layouts, more advanced parsers are > > > > used by the kernel in order to give direct access to the content of each > > > > cell, regardless of its position/size in the underlying > > > > device. Unfortunately, these information are not accessible by users, > > > > unless by fully re-implementing the parser logic in userland. > > > > > > > > Let's expose the cells and their content through sysfs to avoid these > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be > > > > enabled for this support to be available. > > > > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > > > > group member will be filled at runtime only when relevant and will > > > > remain empty otherwise. In this case, as the cells attribute group will > > > > be empty, it will not lead to any additional folder/file creation. > > > > > > > > Exposed cells are read-only. There is, in practice, everything in the > > > > core to support a write path, but as I don't see any need for that, I > > > > prefer to keep the interface simple (and probably safer). The interface > > > > is documented as being in the "testing" state which means we can later > > > > add a write attribute if though relevant. > > > > > > > > There is one limitation though: if a layout is built as a module but is > > > > not properly installed in the system and loaded manually with insmod > > > > while the nvmem device driver was built-in, the cells won't appear in > > > > sysfs. But if done like that, the cells won't be usable by the built-in > > > > kernel drivers anyway. > > > > > > Wait, what? That should not be an issue here, if so, then this change > > > is not correct and should be fixed as this is NOT an issue for sysfs > > > (otherwise the whole tree wouldn't work.) > > > > > > Please fix up your dependancies if this is somehow not working properly. > > > > I'm not sure I fully get your point. > > > > There is no way we can describe any dependency between a storage device > > driver and an nvmem layout. NVMEM is a pure software abstraction, the > > layout that will be chosen depends on the device tree, but if the > > layout has not been installed, there is no existing mechanism in > > the kernel to prevent it from being loaded (how do you know it's > > not on purpose?). > > Once a layout has been loaded, the sysfs files should show up, right? > Otherwise what does a "layout" do? (hint, I have no idea, it's an odd > term to me...) Sorry for the latency in responding to these questions, I'll try to clarify the situation. We have: - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which typically probe and register their devices into the nvmem layer to expose their content through NVMEM. 
- each registration in NVMEM leads to the creation of the relevant
  NVMEM cells which can then be used by other device drivers
  (typically: a network controller retrieving a MAC address from an
  EEPROM through the generic NVMEM abstraction).

We recently covered a slightly new case: the NVMEM cells can be in
random places in the storage devices so we need a "dynamic" way to
discover them: this is the purpose of the NVMEM layouts. We know cell X
is in the device, we just don't know where it is exactly at compile
time, the layout driver will discover it dynamically for us at runtime.

While the "static cells" parser is built-in the NVMEM subsystem, you
explicitly asked to have the layouts modularized. This means
registering a storage device in nvmem while no layout driver has been
inserted yet is now a scenario. We cannot describe any dependency
between a storage device and a layout driver. We cannot defer the probe
either because device drivers which don't get access to their NVMEM
cell are responsible of choosing what to do (most of the time, the idea
is to fallback to a default value to avoid failing the probe for no
reason).

So to answer your original question:

> Once a layout has been loaded, the sysfs files should show up, right?

No. The layouts are kind of "libraries" that the NVMEM subsystem uses
to try exposing cells *when* a new device is registered in NVMEM (not
later). The registration of an NVMEM layout does not trigger any new
parsing, because that is not how the NVMEM subsystem was designed.

I must emphasize that if the layout driver is installed in
/lib/modules/ there is no problem, it will be loaded with
usermodehelper. But if it is not, we can very well have the layout
driver inserted after, and this case, while in practice possible, is
irrelevant from a driver standpoint. It does not make any sense to have
these cells created "after" because they are mostly used during probes.
An easy workaround would be to unregister/register again the underlying
storage device driver.

Do these explanations clarify the situation?

Thanks,
Miquèl
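To make the "library" role of a layout more concrete, here is a very rough
sketch of what a modular layout driver looked like during this development
cycle. The compatible string, cell name and offsets are invented, and the field
and function names are given from memory, so they may not match the final API
exactly; the important point is that add_cells() runs while the nvmem device is
being registered, not when the layout module happens to be loaded.

#include <linux/module.h>
#include <linux/of.h>
#include <linux/nvmem-provider.h>

/* Called by the nvmem core during nvmem_register() of a matching device. */
static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	struct nvmem_cell_info info = {
		.name	= "mac-address",	/* discovered at runtime */
		.offset	= 0x10,			/* made-up offset */
		.bytes	= 6,
	};

	/* A real layout would parse the device content to find its cells. */
	return nvmem_add_one_cell(nvmem, &info);
}

static const struct of_device_id example_of_match[] = {
	{ .compatible = "example,nvmem-layout" },
	{ /* sentinel */ },
};

static struct nvmem_layout example_layout = {
	.name		= "example-layout",
	.of_compatible	= example_of_match,
	.add_cells	= example_add_cells,
};

static int __init example_layout_init(void)
{
	return nvmem_layout_register(&example_layout);
}
module_init(example_layout_init);

static void __exit example_layout_exit(void)
{
	nvmem_layout_unregister(&example_layout);
}
module_exit(example_layout_exit);

MODULE_LICENSE("GPL");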
Hi Chen-Yu, > > static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem, > > @@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) > > if (rval) > > goto err_remove_cells; > > > > +#ifdef CONFIG_NVMEM_SYSFS > > + rval = nvmem_populate_sysfs_cells(nvmem); > > + if (rval) > > + goto err_remove_cells; > > This breaks nvmem / efuse devices with multiple cells that share the > same name. Something like this in DT: > > efuse: efuse@11f10000 { > compatible = "mediatek,mt8183-efuse", > "mediatek,efuse"; > reg = <0 0x11f10000 0 0x1000>; > #address-cells = <1>; > #size-cells = <1>; > thermal_calibration: calib@180 { > reg = <0x180 0xc>; > }; > > mipi_tx_calibration: calib@190 { > reg = <0x190 0xc>; > }; > > svs_calibration: calib@580 { > reg = <0x580 0x64>; > }; > }; > > creates three cells, all named DT, and sysfs will complain: > > sysfs: cannot create duplicate filename '/devices/platform/soc/11f10000.efuse/nvmem1/cells/calib' > mediatek,efuse: probe of 11f10000.efuse failed with error -17 > > This causes the MT8183-based Chromebooks to lose display capability, > among other things. Sorry for the breakage, I did not identify this case, but you're right this is incorrectly handled currently. > The problem lies in the nvmem DT parsing code, where the cell name is > derived from the node name, without including the address portion. > However I'm not sure we can change that, since it could be considered > ABI? I would be in favor suffixing the cell names anyway as they have not been exposed yet to userspace at all (well, not more than a couple of days in -next). Thanks, Miquèl
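For what it's worth, the suffixing idea mentioned above could be as simple as
deriving the sysfs name from the cell name plus its offset, so the three
"calib" cells become "calib@180", "calib@190" and "calib@580". A hypothetical
helper, sketching the idea rather than the fix that was actually merged:

/* Hypothetical sketch: make sysfs cell names unique by appending the offset. */
static const char *nvmem_cell_attr_name(struct nvmem_device *nvmem,
					struct nvmem_cell_entry *entry)
{
	return devm_kasprintf(&nvmem->dev, GFP_KERNEL, "%s@%x",
			      entry->name, entry->offset);
}

The attribute initialization loop in nvmem_populate_sysfs_cells() would then
set attrs[i].attr.name from this helper instead of devm_kstrdup() and keep the
rest unchanged.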
On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote: > Hi Greg, > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 18:59:52 +0200: > > > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote: > > > Hi Greg, > > > > > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200: > > > > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > > > > > The binary content of nvmem devices is available to the user so in the > > > > > easiest cases, finding the content of a cell is rather easy as it is > > > > > just a matter of looking at a known and fixed offset. However, nvmem > > > > > layouts have been recently introduced to cope with more advanced > > > > > situations, where the offset and size of the cells is not known in > > > > > advance or is dynamic. When using layouts, more advanced parsers are > > > > > used by the kernel in order to give direct access to the content of each > > > > > cell, regardless of its position/size in the underlying > > > > > device. Unfortunately, these information are not accessible by users, > > > > > unless by fully re-implementing the parser logic in userland. > > > > > > > > > > Let's expose the cells and their content through sysfs to avoid these > > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be > > > > > enabled for this support to be available. > > > > > > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > > > > > group member will be filled at runtime only when relevant and will > > > > > remain empty otherwise. In this case, as the cells attribute group will > > > > > be empty, it will not lead to any additional folder/file creation. > > > > > > > > > > Exposed cells are read-only. There is, in practice, everything in the > > > > > core to support a write path, but as I don't see any need for that, I > > > > > prefer to keep the interface simple (and probably safer). The interface > > > > > is documented as being in the "testing" state which means we can later > > > > > add a write attribute if though relevant. > > > > > > > > > > There is one limitation though: if a layout is built as a module but is > > > > > not properly installed in the system and loaded manually with insmod > > > > > while the nvmem device driver was built-in, the cells won't appear in > > > > > sysfs. But if done like that, the cells won't be usable by the built-in > > > > > kernel drivers anyway. > > > > > > > > Wait, what? That should not be an issue here, if so, then this change > > > > is not correct and should be fixed as this is NOT an issue for sysfs > > > > (otherwise the whole tree wouldn't work.) > > > > > > > > Please fix up your dependancies if this is somehow not working properly. > > > > > > I'm not sure I fully get your point. > > > > > > There is no way we can describe any dependency between a storage device > > > driver and an nvmem layout. NVMEM is a pure software abstraction, the > > > layout that will be chosen depends on the device tree, but if the > > > layout has not been installed, there is no existing mechanism in > > > the kernel to prevent it from being loaded (how do you know it's > > > not on purpose?). > > > > Once a layout has been loaded, the sysfs files should show up, right? > > Otherwise what does a "layout" do? (hint, I have no idea, it's an odd > > term to me...) > > Sorry for the latency in responding to these questions, I'll try to > clarify the situation. 
> > We have: > - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which > typically probe and register their devices into the nvmem > layer to expose their content through NVMEM. > - each registration in NVMEM leads to the creation of the relevant > NVMEM cells which can then be used by other device drivers > (typically: a network controller retrieving a MAC address from an > EEPROM through the generic NVMEM abstraction). So is a "cell" here a device in the device model? Or something else? > We recently covered a slightly new case: the NVMEM cells can be in > random places in the storage devices so we need a "dynamic" way to > discover them: this is the purpose of the NVMEM layouts. We know cell X > is in the device, we just don't know where it is exactly at compile > time, the layout driver will discover it dynamically for us at runtime. So you then create the needed device when it is found? > While the "static cells" parser is built-in the NVMEM subsystem, you > explicitly asked to have the layouts modularized. This means > registering a storage device in nvmem while no layout driver has been > inserted yet is now a scenario. We cannot describe any dependency > between a storage device and a layout driver. We cannot defer the probe > either because device drivers which don't get access to their NVMEM > cell are responsible of choosing what to do (most of the time, the idea > is to fallback to a default value to avoid failing the probe for no > reason). > > So to answer your original question: > > > Once a layout has been loaded, the sysfs files should show up, right? > > No. The layouts are kind of "libraries" that the NVMEM subsystem uses > to try exposing cells *when* a new device is registered in NVMEM (not > later). The registration of an NVMEM layout does not trigger any new > parsing, because that is not how the NVMEM subsystem was designed. So they are a type of "class" right? Why not just use class devices then? > I must emphasize that if the layout driver is installed in > /lib/modules/ there is no problem, it will be loaded with > usermodehelper. But if it is not, we can very well have the layout > driver inserted after, and this case, while in practice possible, is > irrelevant from a driver standpoint. It does not make any sense to have > these cells created "after" because they are mostly used during probes. > An easy workaround would be to unregister/register again the underlying > storage device driver. We really do not support any situation where a module is NOT in the proper place when device discovery happens. So this shouldn't be an issue, yet you all mention it? So how is it happening? And if you used the class code, would that work better as mentioned above? thanks greg k-h
Hi Greg, gregkh@linuxfoundation.org wrote on Tue, 1 Aug 2023 11:56:40 +0200: > On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote: > > Hi Greg, > > > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 18:59:52 +0200: > > > > > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote: > > > > Hi Greg, > > > > > > > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200: > > > > > > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote: > > > > > > The binary content of nvmem devices is available to the user so in the > > > > > > easiest cases, finding the content of a cell is rather easy as it is > > > > > > just a matter of looking at a known and fixed offset. However, nvmem > > > > > > layouts have been recently introduced to cope with more advanced > > > > > > situations, where the offset and size of the cells is not known in > > > > > > advance or is dynamic. When using layouts, more advanced parsers are > > > > > > used by the kernel in order to give direct access to the content of each > > > > > > cell, regardless of its position/size in the underlying > > > > > > device. Unfortunately, these information are not accessible by users, > > > > > > unless by fully re-implementing the parser logic in userland. > > > > > > > > > > > > Let's expose the cells and their content through sysfs to avoid these > > > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be > > > > > > enabled for this support to be available. > > > > > > > > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute > > > > > > group member will be filled at runtime only when relevant and will > > > > > > remain empty otherwise. In this case, as the cells attribute group will > > > > > > be empty, it will not lead to any additional folder/file creation. > > > > > > > > > > > > Exposed cells are read-only. There is, in practice, everything in the > > > > > > core to support a write path, but as I don't see any need for that, I > > > > > > prefer to keep the interface simple (and probably safer). The interface > > > > > > is documented as being in the "testing" state which means we can later > > > > > > add a write attribute if though relevant. > > > > > > > > > > > > There is one limitation though: if a layout is built as a module but is > > > > > > not properly installed in the system and loaded manually with insmod > > > > > > while the nvmem device driver was built-in, the cells won't appear in > > > > > > sysfs. But if done like that, the cells won't be usable by the built-in > > > > > > kernel drivers anyway. > > > > > > > > > > Wait, what? That should not be an issue here, if so, then this change > > > > > is not correct and should be fixed as this is NOT an issue for sysfs > > > > > (otherwise the whole tree wouldn't work.) > > > > > > > > > > Please fix up your dependancies if this is somehow not working properly. > > > > > > > > I'm not sure I fully get your point. > > > > > > > > There is no way we can describe any dependency between a storage device > > > > driver and an nvmem layout. NVMEM is a pure software abstraction, the > > > > layout that will be chosen depends on the device tree, but if the > > > > layout has not been installed, there is no existing mechanism in > > > > the kernel to prevent it from being loaded (how do you know it's > > > > not on purpose?). > > > > > > Once a layout has been loaded, the sysfs files should show up, right? > > > Otherwise what does a "layout" do? 
(hint, I have no idea, it's an odd > > > term to me...) > > > > Sorry for the latency in responding to these questions, I'll try to > > clarify the situation. > > > > We have: > > - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which > > typically probe and register their devices into the nvmem > > layer to expose their content through NVMEM. > > - each registration in NVMEM leads to the creation of the relevant > > NVMEM cells which can then be used by other device drivers > > (typically: a network controller retrieving a MAC address from an > > EEPROM through the generic NVMEM abstraction). > > > So is a "cell" here a device in the device model? Or something else? It is not a device in the device model, but I am wondering if it should not be one actually. I discussed with Rafal about another issue in the current design (dependence over a layout driver which might defer forever a storage device probe) which might be solved if the core was handling these layouts differently. > > We recently covered a slightly new case: the NVMEM cells can be in > > random places in the storage devices so we need a "dynamic" way to > > discover them: this is the purpose of the NVMEM layouts. We know cell X > > is in the device, we just don't know where it is exactly at compile > > time, the layout driver will discover it dynamically for us at runtime. > > So you then create the needed device when it is found? We don't create devices, but we match the layouts with the NVMEM devices thanks to the of_ logic. > > While the "static cells" parser is built-in the NVMEM subsystem, you > > explicitly asked to have the layouts modularized. This means > > registering a storage device in nvmem while no layout driver has been > > inserted yet is now a scenario. We cannot describe any dependency > > between a storage device and a layout driver. We cannot defer the probe > > either because device drivers which don't get access to their NVMEM > > cell are responsible of choosing what to do (most of the time, the idea > > is to fallback to a default value to avoid failing the probe for no > > reason). > > > > So to answer your original question: > > > > > Once a layout has been loaded, the sysfs files should show up, right? > > > > No. The layouts are kind of "libraries" that the NVMEM subsystem uses > > to try exposing cells *when* a new device is registered in NVMEM (not > > later). The registration of an NVMEM layout does not trigger any new > > parsing, because that is not how the NVMEM subsystem was designed. > > So they are a type of "class" right? Why not just use class devices > then? > > > I must emphasize that if the layout driver is installed in > > /lib/modules/ there is no problem, it will be loaded with > > usermodehelper. But if it is not, we can very well have the layout > > driver inserted after, and this case, while in practice possible, is > > irrelevant from a driver standpoint. It does not make any sense to have > > these cells created "after" because they are mostly used during probes. > > An easy workaround would be to unregister/register again the underlying > > storage device driver. > > We really do not support any situation where a module is NOT in the > proper place when device discovery happens. Great, I didn't know. Then there is no issue. > So this shouldn't be an > issue, yet you all mention it? So how is it happening? Just transparency, I'm giving all details I can. I'll try to come with something slightly different than what we have with the current approach. Thanks, Miquèl
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 48659106a1e2..6c04a9cf6919 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -325,6 +325,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
 	return nvmem_bin_attr_get_umode(nvmem);
 }
 
+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+					    const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+				    struct bin_attribute *attr, char *buf,
+				    loff_t pos, size_t count)
+{
+	struct nvmem_cell_entry *entry;
+	struct nvmem_cell *cell = NULL;
+	size_t cell_sz, read_len;
+	void *content;
+
+	entry = attr->private;
+	cell = nvmem_create_cell(entry, entry->name, 0);
+	if (IS_ERR(cell))
+		return PTR_ERR(cell);
+
+	if (!cell)
+		return -EINVAL;
+
+	content = nvmem_cell_read(cell, &cell_sz);
+	if (IS_ERR(content)) {
+		read_len = PTR_ERR(content);
+		goto destroy_cell;
+	}
+
+	read_len = min_t(unsigned int, cell_sz - pos, count);
+	memcpy(buf, content + pos, read_len);
+	kfree(content);
+
+destroy_cell:
+	kfree_const(cell->id);
+	kfree(cell);
+
+	return read_len;
+}
+
 /* default read/write permissions */
 static struct bin_attribute bin_attr_rw_nvmem = {
 	.attr	= {
@@ -346,8 +383,14 @@ static const struct attribute_group nvmem_bin_group = {
 	.is_bin_visible = nvmem_bin_attr_is_visible,
 };
 
+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+	.name = "cells",
+};
+
 static const struct attribute_group *nvmem_dev_groups[] = {
 	&nvmem_bin_group,
+	&nvmem_cells_group,
 	NULL,
 };
 
@@ -406,6 +449,58 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
 		device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
 }
 
+static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
+{
+	struct bin_attribute **cells_attrs, *attrs;
+	struct nvmem_cell_entry *entry;
+	unsigned int ncells = 0, i = 0;
+	int ret = 0;
+
+	mutex_lock(&nvmem_mutex);
+
+	if (list_empty(&nvmem->cells))
+		goto unlock_mutex;
+
+	/* Allocate an array of attributes with a sentinel */
+	ncells = list_count_nodes(&nvmem->cells);
+	cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+				   sizeof(struct bin_attribute *), GFP_KERNEL);
+	if (!cells_attrs) {
+		ret = -ENOMEM;
+		goto unlock_mutex;
+	}
+
+	attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+	if (!attrs) {
+		ret = -ENOMEM;
+		goto unlock_mutex;
+	}
+
+	/* Initialize each attribute to take the name and size of the cell */
+	list_for_each_entry(entry, &nvmem->cells, node) {
+		sysfs_bin_attr_init(&attrs[i]);
+		attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
+		attrs[i].attr.mode = 0444;
+		attrs[i].size = entry->bytes;
+		attrs[i].read = &nvmem_cell_attr_read;
+		attrs[i].private = entry;
+		if (!attrs[i].attr.name) {
+			ret = -ENOMEM;
+			goto unlock_mutex;
+		}
+
+		cells_attrs[i] = &attrs[i];
+		i++;
+	}
+
+	nvmem_cells_group.bin_attrs = cells_attrs;
+
+unlock_mutex:
+	mutex_unlock(&nvmem_mutex);
+
+	return ret;
+}
+
 #else /* CONFIG_NVMEM_SYSFS */
 
 static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
 	if (rval)
 		goto err_remove_cells;
 
+#ifdef CONFIG_NVMEM_SYSFS
+	rval = nvmem_populate_sysfs_cells(nvmem);
+	if (rval)
+		goto err_remove_cells;
+#endif
+
 	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
 
 	rval = device_add(&nvmem->dev);