[v3,4/4] nvmem: core: Expose cells through sysfs
Commit Message
The binary content of nvmem devices is available to the user, so in the
simplest cases finding the content of a cell is easy: it is just a
matter of looking at a known and fixed offset. However, nvmem
layouts have been recently introduced to cope with more advanced
situations, where the offset and size of the cells is not known in
advance or is dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of each
cell, regardless of its position/size in the underlying
device. Unfortunately, this information is not accessible to users,
short of fully re-implementing the parser logic in userland.
Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.
Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.
Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.
There is one limitation though: if a layout is built as a module but is
not properly installed in the system and loaded manually with insmod
while the nvmem device driver was built-in, the cells won't appear in
sysfs. But if done like that, the cells won't be usable by the built-in
kernel drivers anyway.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
drivers/nvmem/core.c | 128 +++++++++++++++++++++++++++++++++++++++++--
1 file changed, 124 insertions(+), 4 deletions(-)
Comments
Thanks for doing this, Miquel.
A few minor comments below.
On 05/06/2023 14:34, Miquel Raynal wrote:
> The binary content of nvmem devices is available to the user, so in the
> simplest cases finding the content of a cell is easy: it is just a
> matter of looking at a known and fixed offset. However, nvmem
> layouts have been recently introduced to cope with more advanced
> situations, where the offset and size of the cells is not known in
> advance or is dynamic. When using layouts, more advanced parsers are
> used by the kernel in order to give direct access to the content of each
> cell, regardless of its position/size in the underlying
> device. Unfortunately, this information is not accessible to users,
> short of fully re-implementing the parser logic in userland.
>
> Let's expose the cells and their content through sysfs to avoid these
> situations. Of course the relevant NVMEM sysfs Kconfig option must be
> enabled for this support to be available.
>
> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> group member will be filled at runtime only when relevant and will
> remain empty otherwise. In this case, as the cells attribute group will
> be empty, it will not lead to any additional folder/file creation.
>
> Exposed cells are read-only. There is, in practice, everything in the
> core to support a write path, but as I don't see any need for that, I
> prefer to keep the interface simple (and probably safer). The interface
> is documented as being in the "testing" state, which means we can later
> add a write attribute if deemed relevant.
>
> There is one limitation though: if a layout is built as a module but is
> not properly installed in the system and loaded manually with insmod
> while the nvmem device driver was built-in, the cells won't appear in
> sysfs. But if done like that, the cells won't be usable by the built-in
> kernel drivers anyway.
>
> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
> ---
> drivers/nvmem/core.c | 128 +++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 124 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 342cd380b420..625e3de273b7 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -325,6 +325,61 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
> return nvmem_bin_attr_get_umode(nvmem);
> }
>
> +static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
> + const char *id, int index);
> +
> +static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
> + struct bin_attribute *attr, char *buf,
> + loff_t pos, size_t count)
> +{
> + struct nvmem_cell_entry *entry;
> + struct nvmem_cell *cell = NULL;
> + struct nvmem_device *nvmem;
> + size_t cell_sz, read_len;
> + struct device *dev;
> + void *content;
> +
> + if (attr->private)
> + dev = attr->private;
I see no point in this, as we do not set any private data for this
attribute.
How about storing the nvmem_cell_entry in ->private and using it here?
This should also avoid looping over every cell in the follow-on code.
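Something along these lines, as an untested sketch (the loop below is
the one from nvmem_populate_sysfs_cells() in this patch, plus the
->private assignment; it assumes the entry outlives the attribute,
which holds since cells are only removed together with the device):

	list_for_each_entry(entry, &nvmem->cells, node) {
		sysfs_bin_attr_init(&attrs[i]);
		attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
		attrs[i].attr.mode = 0444;
		attrs[i].size = entry->bytes;
		attrs[i].read = &nvmem_cell_attr_read;
		attrs[i].private = entry;	/* let the callback find the cell directly */
		if (!attrs[i].attr.name) {
			ret = -ENOMEM;
			goto unlock_mutex;
		}

		cells_attrs[i] = &attrs[i];
		i++;
	}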
> + else
> + dev = kobj_to_dev(kobj);
new line here.
> + nvmem = to_nvmem_device(dev);
> +
> + mutex_lock(&nvmem_mutex);
> + list_for_each_entry(entry, &nvmem->cells, node) {
> + if (strncmp(entry->name, attr->attr.name, XATTR_NAME_MAX))
> + continue;
> +
> + cell = nvmem_create_cell(entry, entry->name, 0);
> + if (IS_ERR(cell)) {
> + mutex_unlock(&nvmem_mutex);
> + return PTR_ERR(cell);
> + }
> +
> + break;
> + }
> + mutex_unlock(&nvmem_mutex);
> +
> + if (!cell)
> + return -EINVAL;
> +
> + content = nvmem_cell_read(cell, &cell_sz);
> + if (IS_ERR(content)) {
> + read_len = PTR_ERR(content);
> + goto destroy_cell;
> + }
> +
> + read_len = min_t(unsigned int, cell_sz - pos, count);
> + memcpy(buf, content + pos, read_len);
> + kfree(content);
> +
> +destroy_cell:
> + kfree_const(cell->id);
> + kfree(cell);
> +
> + return read_len;
> +}
> +
> /* default read/write permissions */
> static struct bin_attribute bin_attr_rw_nvmem = {
> .attr = {
> @@ -346,8 +401,14 @@ static const struct attribute_group nvmem_bin_group = {
> .is_bin_visible = nvmem_bin_attr_is_visible,
> };
>
> +/* Cell attributes will be dynamically allocated */
> +static struct attribute_group nvmem_cells_group = {
> + .name = "cells",
> +};
> +
> static const struct attribute_group *nvmem_dev_groups[] = {
> &nvmem_bin_group,
> + &nvmem_cells_group,
> NULL,
> };
>
> @@ -406,6 +467,59 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
> device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
> }
>
> +static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
> +{
> + struct bin_attribute **cells_attrs, *attrs;
> + struct nvmem_cell_entry *entry;
> + unsigned int ncells = 0, i = 0;
> + int ret = 0;
> +
> + mutex_lock(&nvmem_mutex);
> +
> + if (list_empty(&nvmem->cells))
> + goto unlock_mutex;
> +
> + list_for_each_entry(entry, &nvmem->cells, node)
> + ncells++;
list_count_nodes?
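For reference, list_count_nodes() (a recent addition in
include/linux/list.h) would collapse this loop to something like:

	ncells = list_count_nodes(&nvmem->cells);

(it returns a size_t, so ncells' type may want adjusting).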
> +
> + /* Allocate an array of attributes with a sentinel */
> + cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
> + sizeof(struct bin_attribute *), GFP_KERNEL);
> + if (!cells_attrs) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
> + if (!attrs) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + /* Initialize each attribute to take the name and size of the cell */
> + list_for_each_entry(entry, &nvmem->cells, node) {
> + sysfs_bin_attr_init(&attrs[i]);
> + attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
> + attrs[i].attr.mode = 0444;
> + attrs[i].size = entry->bytes;
> + attrs[i].read = &nvmem_cell_attr_read;
> + if (!attrs[i].attr.name) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + cells_attrs[i] = &attrs[i];
> + i++;
> + }
> +
> + nvmem_cells_group.bin_attrs = cells_attrs;
> +
> +unlock_mutex:
> + mutex_unlock(&nvmem_mutex);
> +
> + return ret;
> +}
> +
> #else /* CONFIG_NVMEM_SYSFS */
>
> static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
> @@ -976,16 +1090,22 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> if (rval)
> goto err_remove_cells;
>
> + rval = nvmem_add_cells_from_layout(nvmem);
> + if (rval)
> + goto err_remove_cells;
> +
> +#ifdef CONFIG_NVMEM_SYSFS
> + rval = nvmem_populate_sysfs_cells(nvmem);
> + if (rval)
> + goto err_remove_cells;
> +#endif
> +
> dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
>
> rval = device_add(&nvmem->dev);
> if (rval)
> goto err_remove_cells;
>
> - rval = nvmem_add_cells_from_layout(nvmem);
> - if (rval)
> - goto err_remove_cells;
> -
> blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>
> return nvmem;
Hi Srinivas,
srinivas.kandagatla@linaro.org wrote on Tue, 6 Jun 2023 11:31:07 +0100:
> Thanks for doing this, Miquel.
Great, I'm glad you like the idea :)
>
> A few minor comments below.
>
> On 05/06/2023 14:34, Miquel Raynal wrote:
> > The binary content of nvmem devices is available to the user, so in the
> > simplest cases finding the content of a cell is easy: it is just a
> > matter of looking at a known and fixed offset. However, nvmem
> > layouts have been recently introduced to cope with more advanced
> > situations, where the offset and size of the cells is not known in
> > advance or is dynamic. When using layouts, more advanced parsers are
> > used by the kernel in order to give direct access to the content of each
> > cell, regardless of its position/size in the underlying
> > device. Unfortunately, this information is not accessible to users,
> > short of fully re-implementing the parser logic in userland.
> >
> > Let's expose the cells and their content through sysfs to avoid these
> > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > enabled for this support to be available.
> >
> > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > group member will be filled at runtime only when relevant and will
> > remain empty otherwise. In this case, as the cells attribute group will
> > be empty, it will not lead to any additional folder/file creation.
> >
> > Exposed cells are read-only. There is, in practice, everything in the
> > core to support a write path, but as I don't see any need for that, I
> > prefer to keep the interface simple (and probably safer). The interface
> > is documented as being in the "testing" state, which means we can later
> > add a write attribute if deemed relevant.
> >
> > There is one limitation though: if a layout is built as a module but is
> > not properly installed in the system and loaded manually with insmod
> > while the nvmem device driver was built-in, the cells won't appear in
> > sysfs. But if done like that, the cells won't be usable by the built-in
> > kernel drivers anyway.
> >
> > Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
> > ---
> > drivers/nvmem/core.c | 128 +++++++++++++++++++++++++++++++++++++++++--
> > 1 file changed, 124 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> > index 342cd380b420..625e3de273b7 100644
> > --- a/drivers/nvmem/core.c
> > +++ b/drivers/nvmem/core.c
> > @@ -325,6 +325,61 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
> > return nvmem_bin_attr_get_umode(nvmem);
> > }
> >
> > +static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
> > + const char *id, int index);
> > +
> > +static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
> > + struct bin_attribute *attr, char *buf,
> > + loff_t pos, size_t count)
> > +{
> > + struct nvmem_cell_entry *entry;
> > + struct nvmem_cell *cell = NULL;
> > + struct nvmem_device *nvmem;
> > + size_t cell_sz, read_len;
> > + struct device *dev;
> > + void *content;
> > +
> > + if (attr->private)
> > + dev = attr->private;
> I see no point in this, as we do not set any private data for this attribute.
>
> How about storing the nvmem_cell_entry in ->private and using it here?
> This should also avoid looping over every cell in the follow-on code.
Mmmh, excellent idea, that will simplify the helper a lot. I'll do
that, thanks a lot for the suggestion!
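For the record, a rough sketch of what the simplified callback could
look like with the entry stashed in ->private (untested, derived from
the code in this patch; the lookup loop and the nvmem_mutex section go
away entirely):

static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
				    struct bin_attribute *attr, char *buf,
				    loff_t pos, size_t count)
{
	struct nvmem_cell_entry *entry = attr->private;
	struct nvmem_cell *cell;
	size_t cell_sz, read_len;
	void *content;

	cell = nvmem_create_cell(entry, entry->name, 0);
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	content = nvmem_cell_read(cell, &cell_sz);
	if (IS_ERR(content)) {
		read_len = PTR_ERR(content);
		goto destroy_cell;
	}

	read_len = min_t(unsigned int, cell_sz - pos, count);
	memcpy(buf, content + pos, read_len);
	kfree(content);

destroy_cell:
	kfree_const(cell->id);
	kfree(cell);

	return read_len;
}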
>
> > + else
> > + dev = kobj_to_dev(kobj);
>
> new line here.
>
> > + nvmem = to_nvmem_device(dev);
> > +
> > + mutex_lock(&nvmem_mutex);
> > + list_for_each_entry(entry, &nvmem->cells, node) {
> > + if (strncmp(entry->name, attr->attr.name, XATTR_NAME_MAX))
> > + continue;
> > +
> > + cell = nvmem_create_cell(entry, entry->name, 0);
>
> > + if (IS_ERR(cell)) {
> > + mutex_unlock(&nvmem_mutex);
> > + return PTR_ERR(cell);
> > + }
> > +
> > + break;
> > + }
> > + mutex_unlock(&nvmem_mutex);
> > +
> > + if (!cell)
> > + return -EINVAL;
> > +
> > + content = nvmem_cell_read(cell, &cell_sz);
> > + if (IS_ERR(content)) {
> > + read_len = PTR_ERR(content);
> > + goto destroy_cell;
> > + }
> > +
> > + read_len = min_t(unsigned int, cell_sz - pos, count);
> > + memcpy(buf, content + pos, read_len);
> > + kfree(content);
> > +
> > +destroy_cell:
> > + kfree_const(cell->id);
> > + kfree(cell);
> > +
> > + return read_len;
> > +}
> > +
> > /* default read/write permissions */
> > static struct bin_attribute bin_attr_rw_nvmem = {
> > .attr = {
> > @@ -346,8 +401,14 @@ static const struct attribute_group nvmem_bin_group = {
> > .is_bin_visible = nvmem_bin_attr_is_visible,
> > };
> >
> > +/* Cell attributes will be dynamically allocated */
> > +static struct attribute_group nvmem_cells_group = {
> > + .name = "cells",
> > +};
> > +
> > static const struct attribute_group *nvmem_dev_groups[] = {
> > &nvmem_bin_group,
> > + &nvmem_cells_group,
> > NULL,
> > };
> >
> > @@ -406,6 +467,59 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
> > device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
> > }
> >
> > +static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
> > +{
> > + struct bin_attribute **cells_attrs, *attrs;
> > + struct nvmem_cell_entry *entry;
> > + unsigned int ncells = 0, i = 0;
> > + int ret = 0;
> > +
> > + mutex_lock(&nvmem_mutex);
> > +
> > + if (list_empty(&nvmem->cells))
> > + goto unlock_mutex;
> > +
> > + list_for_each_entry(entry, &nvmem->cells, node)
> > + ncells++;
>
> list_count_nodes?
I didn't even know about this helper. Of course I'll use it!
> > +
> > + /* Allocate an array of attributes with a sentinel */
> > + cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
> > + sizeof(struct bin_attribute *), GFP_KERNEL);
> > + if (!cells_attrs) {
> > + ret = -ENOMEM;
> > + goto unlock_mutex;
> > + }
> > +
> > + attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
> > + if (!attrs) {
> > + ret = -ENOMEM;
> > + goto unlock_mutex;
> > + }
> > +
> > + /* Initialize each attribute to take the name and size of the cell */
> > + list_for_each_entry(entry, &nvmem->cells, node) {
> > + sysfs_bin_attr_init(&attrs[i]);
> > + attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
> > + attrs[i].attr.mode = 0444;
> > + attrs[i].size = entry->bytes;
> > + attrs[i].read = &nvmem_cell_attr_read;
> > + if (!attrs[i].attr.name) {
> > + ret = -ENOMEM;
> > + goto unlock_mutex;
> > + }
> > +
> > + cells_attrs[i] = &attrs[i];
> > + i++;
> > + }
> > +
> > + nvmem_cells_group.bin_attrs = cells_attrs;
> > +
> > +unlock_mutex:
> > + mutex_unlock(&nvmem_mutex);
> > +
> > + return ret;
> > +}
> > +
> > #else /* CONFIG_NVMEM_SYSFS */
> >
> > static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
> > @@ -976,16 +1090,22 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > if (rval)
> > goto err_remove_cells;
> >
> > + rval = nvmem_add_cells_from_layout(nvmem);
> > + if (rval)
> > + goto err_remove_cells;
> > +
> > +#ifdef CONFIG_NVMEM_SYSFS
> > + rval = nvmem_populate_sysfs_cells(nvmem);
> > + if (rval)
> > + goto err_remove_cells;
> > +#endif
> > +
> > dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
> >
> > rval = device_add(&nvmem->dev);
> > if (rval)
> > goto err_remove_cells;
> >
> > - rval = nvmem_add_cells_from_layout(nvmem);
> > - if (rval)
> > - goto err_remove_cells;
> > -
> > blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
> >
> > return nvmem;
Thanks,
Miquèl
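For context, with this patch applied a device declaring cells exposes
one read-only binary file per cell, along the lines of (illustrative
device and cell names, the actual entries depend on what the device
declares):

	/sys/bus/nvmem/devices/<device>/cells/<cell-name>

each file having mode 0444 and the size of the corresponding cell.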
@@ -325,6 +325,61 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
return nvmem_bin_attr_get_umode(nvmem);
}

+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+ const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ struct nvmem_cell_entry *entry;
+ struct nvmem_cell *cell = NULL;
+ struct nvmem_device *nvmem;
+ size_t cell_sz, read_len;
+ struct device *dev;
+ void *content;
+
+ if (attr->private)
+ dev = attr->private;
+ else
+ dev = kobj_to_dev(kobj);
+ nvmem = to_nvmem_device(dev);
+
+ mutex_lock(&nvmem_mutex);
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ if (strncmp(entry->name, attr->attr.name, XATTR_NAME_MAX))
+ continue;
+
+ cell = nvmem_create_cell(entry, entry->name, 0);
+ if (IS_ERR(cell)) {
+ mutex_unlock(&nvmem_mutex);
+ return PTR_ERR(cell);
+ }
+
+ break;
+ }
+ mutex_unlock(&nvmem_mutex);
+
+ if (!cell)
+ return -EINVAL;
+
+ content = nvmem_cell_read(cell, &cell_sz);
+ if (IS_ERR(content)) {
+ read_len = PTR_ERR(content);
+ goto destroy_cell;
+ }
+
+ read_len = min_t(unsigned int, cell_sz - pos, count);
+ memcpy(buf, content + pos, read_len);
+ kfree(content);
+
+destroy_cell:
+ kfree_const(cell->id);
+ kfree(cell);
+
+ return read_len;
+}
+
/* default read/write permissions */
static struct bin_attribute bin_attr_rw_nvmem = {
.attr = {
@@ -346,8 +401,14 @@ static const struct attribute_group nvmem_bin_group = {
.is_bin_visible = nvmem_bin_attr_is_visible,
};

+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+ .name = "cells",
+};
+
static const struct attribute_group *nvmem_dev_groups[] = {
&nvmem_bin_group,
+ &nvmem_cells_group,
NULL,
};

@@ -406,6 +467,59 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
}

+static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
+{
+ struct bin_attribute **cells_attrs, *attrs;
+ struct nvmem_cell_entry *entry;
+ unsigned int ncells = 0, i = 0;
+ int ret = 0;
+
+ mutex_lock(&nvmem_mutex);
+
+ if (list_empty(&nvmem->cells))
+ goto unlock_mutex;
+
+ list_for_each_entry(entry, &nvmem->cells, node)
+ ncells++;
+
+ /* Allocate an array of attributes with a sentinel */
+ cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+ sizeof(struct bin_attribute *), GFP_KERNEL);
+ if (!cells_attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+ if (!attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ /* Initialize each attribute to take the name and size of the cell */
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ sysfs_bin_attr_init(&attrs[i]);
+ attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
+ attrs[i].attr.mode = 0444;
+ attrs[i].size = entry->bytes;
+ attrs[i].read = &nvmem_cell_attr_read;
+ if (!attrs[i].attr.name) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ cells_attrs[i] = &attrs[i];
+ i++;
+ }
+
+ nvmem_cells_group.bin_attrs = cells_attrs;
+
+unlock_mutex:
+ mutex_unlock(&nvmem_mutex);
+
+ return ret;
+}
+
#else /* CONFIG_NVMEM_SYSFS */

static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -976,16 +1090,22 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

+ rval = nvmem_add_cells_from_layout(nvmem);
+ if (rval)
+ goto err_remove_cells;
+
+#ifdef CONFIG_NVMEM_SYSFS
+ rval = nvmem_populate_sysfs_cells(nvmem);
+ if (rval)
+ goto err_remove_cells;
+#endif
+
dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);

rval = device_add(&nvmem->dev);
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
- if (rval)
- goto err_remove_cells;
-
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;