[v10,1/3] nvmem: core: Rework layouts to become platform devices
Commit Message
Current layout support was initially written without module support in
mind. When the requirement for module support arose, the existing base
was reworked to support modularization, but a design flaw was
introduced in the process. With the existing implementation, when a
storage device registers with NVMEM, the core tries to hook up a layout
(if any) and populates its cells immediately. This means that if the
hardware description expects a layout to be hooked up but no driver has
been provided for it, the storage device will fail to probe and will be
retried later from scratch. Technically, the layouts are more of a
"plus" and, even if we consider that the hardware description shall be
correct, we could still probe the storage device (especially if it
contains the rootfs).

One way to overcome this situation is to consider the layouts as
devices and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the relevant layout driver, if needed
- try to hook the NVMEM device up with a layout in the notifier
And when a new layout is registered:
- try to hook up all the existing NVMEM devices which are not yet
  hooked to a layout with the new layout
This way there is no strong ordering to enforce: any NVMEM device
creation or NVMEM layout driver insertion is observed as a new event
which may lead to the creation of additional cells, without disturbing
the probes with costly (and sometimes endless) deferrals.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
drivers/nvmem/core.c | 140 ++++++++++++++++++++++++-------
drivers/nvmem/layouts/onie-tlv.c | 39 +++++++--
drivers/nvmem/layouts/sl28vpd.c | 39 +++++++--
include/linux/nvmem-provider.h | 11 +--
4 files changed, 180 insertions(+), 49 deletions(-)
Comments
On 2023-09-22 19:48, Miquel Raynal wrote:
> [...]
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the relevant driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked
> to
> a layout with the new layout
> This way, there is no strong order to enforce, any NVMEM device
> creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
I rebased & tested my patch converting the U-Boot NVMEM device to an
NVMEM layout on top of this. It worked!
Tested-by: Rafał Miłecki <rafal@milecki.pl>
For reference, this is what I used:
partitions {
	partition-loader {
		compatible = "brcm,u-boot";

		partition-u-boot-env {
			compatible = "nvmem-cells";

			nvmem-layout {
				compatible = "brcm,env";

				base_mac_addr: ethaddr {
					#nvmem-cell-cells = <1>;
				};
			};
		};
	};
};
Thanks, Miquel, for the work on this.
I have one comment below.
On 22/09/2023 18:48, Miquel Raynal wrote:
> [...]
> +static int nvmem_notifier_call(struct notifier_block *notifier,
> + unsigned long event_flags, void *context)
> +{
> + struct nvmem_device *nvmem = NULL;
> + int ret;
> +
> + switch (event_flags) {
> + case NVMEM_ADD:
> + nvmem = context;
> + break;
> + case NVMEM_LAYOUT_ADD:
> + break;
> + default:
> + return NOTIFY_DONE;
> + }
It looks a bit unnatural for the core to register a notifier for its
own events.

Why do we need the notifier at the core level? Can we not just handle
this in the core before raising these events, instead of registering a
notifier callback?

--srini
> +
> + if (nvmem) {
> + /*
> + * In case the nvmem device was built-in while the layout was
> + * built as a module, manually request loading the layout driver.
> + */
> + nvmem_try_loading_layout_driver(nvmem);
> +
> + /* Populate the cells of the new nvmem device from its layout, if any */
> + ret = nvmem_match_available_layout(nvmem);
> + } else {
> + /* NVMEM devices might be "waiting" for this layout */
> + ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
> + }
> +
> + if (ret)
> + return notifier_from_errno(ret);
> +
> + return NOTIFY_OK;
> +}
> +
> static int __init nvmem_init(void)
> {
> - return bus_register(&nvmem_bus_type);
> + int ret;
> +
> + ret = bus_register(&nvmem_bus_type);
> + if (ret)
> + return ret;
> +
> + nvmem_nb.notifier_call = &nvmem_notifier_call;
> + return nvmem_register_notifier(&nvmem_nb);
> }
>
> static void __exit nvmem_exit(void)
> {
> + nvmem_unregister_notifier(&nvmem_nb);
> bus_unregister(&nvmem_bus_type);
> }
>
On Fri, Sep 22, 2023 at 07:48:52PM +0200, Miquel Raynal wrote:
> [...]
Did I miss why these were decided to be platform devices and not normal
devices on their own "bus" that are attached to the parent device
properly? Why platform for a dynamic thing?
If I did agree with this, it should be documented here in the changelog
why this is required to be this way so I don't ask the question again in
the future :)
thanks,
greg k-h
Hi Greg,
gregkh@linuxfoundation.org wrote on Mon, 2 Oct 2023 11:35:02 +0200:
> On Fri, Sep 22, 2023 at 07:48:52PM +0200, Miquel Raynal wrote:
> > [...]
>
> Did I miss why these were decided to be platform devices and not normal
> devices on their own "bus" that are attached to the parent device
> properly? Why platform for a dynamic thing?
I don't think you missed anything. Following the discussion about "how
to picture these layouts as devices", I came up with the simplest
approach: using the platform infrastructure. I thought creating my own
additional bus just for that would involve too much code duplication.

I agree the current implementation kind of abuses the platform
infrastructure. I will have a look at possibly turning this into its
own bus.
> If I did agree with this, it should be documented here in the changelog
> why this is required to be this way so I don't ask the question again in
> the future :)
Haha, I don't think you did ;)
Thanks,
Miquèl
Hi Srinivas,
> > [...]
> > +/*
> > + * When an NVMEM device is registered, try to match against a layout and
> > + * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
> > + * which could use it properly expose their cells.
> > + */
> > +static int nvmem_notifier_call(struct notifier_block *notifier,
> > + unsigned long event_flags, void *context)
> > +{
> > + struct nvmem_device *nvmem = NULL;
> > + int ret;
> > +
> > + switch (event_flags) {
> > + case NVMEM_ADD:
> > + nvmem = context;
> > + break;
> > + case NVMEM_LAYOUT_ADD:
> > + break;
> > + default:
> > + return NOTIFY_DONE;
> > + }
>
> It looks a bit unnatural for the core to register a notifier for its
> own events.
>
> Why do we need the notifier at the core level? Can we not just handle
> this in the core before raising these events, instead of registering a
> notifier callback?
There is no good place to do that "synchronously". We need some kind of
notification mechanism in these two cases:
* A memory device is being probed -> if a matching layout driver is
  already available, we need to parse the device and expose the cells,
  but not in the thread registering the memory device.
* A layout driver is being insmod'ed -> if a memory device needs it to
  create cells, we need to parse the device content, but I find it
  crappy to start device-specific parsing in the registration handler.

So the probe of the memory device is not a good place for this, nor is
the registration of the layout driver. Yet, we need to do the same
operation upon two different "events".

This notifier mechanism is a clean and easy way to get notified and to
implement a callback which does not block the thread doing the initial
registration. I am personally not bothered by using it only internally.
If you have another mechanism in mind to perform a similar operation,
or a way to avoid this need, I'll do the switch.
Thanks,
Miquèl
Hi Miquel,
miquel.raynal@bootlin.com wrote on Tue, 3 Oct 2023 11:43:26 +0200:
> Hi Srinivas,
>
> > > [...]
> >
> > It looks a bit unnatural for the core to register a notifier for
> > its own events.
> >
> > Why do we need the notifier at the core level? Can we not just
> > handle this in the core before raising these events, instead of
> > registering a notifier callback?
> [...]
Since I've changed the way nvmem devices and layouts depend on each
other in v11, I've been giving this a second thought and I think this
can now be avoided. I've improved the layout registration callback so
that it actually retrieves the nvmem device the layout is probing on
and populates the dynamic cells *there* (instead of during the probe of
the nvmem device itself). This way I could drop some boilerplate which
is no longer necessary. It comes at a low cost: there are now two
places where sysfs cells can be added.

I am cleaning all this up and will then let you and Greg review v12.
Thanks,
Miquèl
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
@@ -17,11 +17,13 @@
#include <linux/nvmem-provider.h>
#include <linux/gpio/consumer.h>
#include <linux/of.h>
+#include <linux/of_platform.h>
#include <linux/slab.h>
struct nvmem_device {
struct module *owner;
struct device dev;
+ struct list_head node;
int stride;
int word_size;
int id;
@@ -75,6 +77,7 @@ static LIST_HEAD(nvmem_cell_tables);
static DEFINE_MUTEX(nvmem_lookup_mutex);
static LIST_HEAD(nvmem_lookup_list);
+struct notifier_block nvmem_nb;
static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
static DEFINE_SPINLOCK(nvmem_layout_lock);
@@ -790,23 +793,16 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
{
struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
+ struct nvmem_layout *l, *layout = NULL;
layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
return NULL;
- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
spin_lock(&nvmem_layout_lock);
list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
+ if (of_match_node(l->dev->driver->of_match_table, layout_np)) {
if (try_module_get(l->owner))
layout = l;
@@ -863,7 +859,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;
layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev->driver->of_match_table, layout_np);
return match ? match->data : NULL;
}
@@ -882,6 +878,7 @@ EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
struct nvmem_device *nvmem_register(const struct nvmem_config *config)
{
struct nvmem_device *nvmem;
+ struct device_node *layout_np;
int rval;
if (!config->dev)
@@ -974,19 +971,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}
- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -1005,24 +989,27 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;
- rval = nvmem_add_cells_from_layout(nvmem);
- if (rval)
- goto err_remove_cells;
-
dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
rval = device_add(&nvmem->dev);
if (rval)
goto err_remove_cells;
+ /* Populate layouts as devices */
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ rval = of_platform_populate(nvmem->dev.of_node, NULL, NULL, NULL);
+ of_node_put(layout_np);
+ if (rval)
+ goto err_remove_cells;
+ }
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
return nvmem;
err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -2124,13 +2111,106 @@ const char *nvmem_dev_name(struct nvmem_device *nvmem)
}
EXPORT_SYMBOL_GPL(nvmem_dev_name);
+static void nvmem_try_loading_layout_driver(struct nvmem_device *nvmem)
+{
+ struct device_node *layout_np;
+
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ of_request_module(layout_np);
+ of_node_put(layout_np);
+ }
+}
+
+static int nvmem_match_available_layout(struct nvmem_device *nvmem)
+{
+ int ret;
+
+ if (nvmem->layout)
+ return 0;
+
+ nvmem->layout = nvmem_layout_get(nvmem);
+ if (!nvmem->layout)
+ return 0;
+
+ ret = nvmem_add_cells_from_layout(nvmem);
+ if (ret) {
+ nvmem_layout_put(nvmem->layout);
+ nvmem->layout = NULL;
+ return ret;
+ }
+
+ return 0;
+}
+
+static int nvmem_dev_match_available_layout(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+
+ return nvmem_match_available_layout(nvmem);
+}
+
+static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
+{
+ return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
+}
+
+/*
+ * When an NVMEM device is registered, try to match against a layout and
+ * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
+ * which could use it properly expose their cells.
+ */
+static int nvmem_notifier_call(struct notifier_block *notifier,
+ unsigned long event_flags, void *context)
+{
+ struct nvmem_device *nvmem = NULL;
+ int ret;
+
+ switch (event_flags) {
+ case NVMEM_ADD:
+ nvmem = context;
+ break;
+ case NVMEM_LAYOUT_ADD:
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ if (nvmem) {
+ /*
+ * In case the nvmem device was built-in while the layout was
+ * built as a module, manually request loading the layout driver.
+ */
+ nvmem_try_loading_layout_driver(nvmem);
+
+ /* Populate the cells of the new nvmem device from its layout, if any */
+ ret = nvmem_match_available_layout(nvmem);
+ } else {
+ /* NVMEM devices might be "waiting" for this layout */
+ ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
+ }
+
+ if (ret)
+ return notifier_from_errno(ret);
+
+ return NOTIFY_OK;
+}
+
static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ nvmem_nb.notifier_call = &nvmem_notifier_call;
+ return nvmem_register_notifier(&nvmem_nb);
}
static void __exit nvmem_exit(void)
{
+ nvmem_unregister_notifier(&nvmem_nb);
bus_unregister(&nvmem_bus_type);
}
diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
@@ -13,6 +13,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>
#define ONIE_TLV_MAX_LEN 2048
#define ONIE_TLV_CRC_FIELD_SZ 6
@@ -226,18 +227,46 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}
+static int onie_tlv_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = onie_tlv_parse_table;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int onie_tlv_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);
-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct platform_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ },
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
};
-module_nvmem_layout_driver(onie_tlv_layout);
+module_platform_driver(onie_tlv_layout);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>");
diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
@@ -5,6 +5,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>
#include <uapi/linux/if_ether.h>
#define SL28VPD_MAGIC 'V'
@@ -135,18 +136,46 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}
+static int sl28vpd_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = sl28vpd_add_cells;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int sl28vpd_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);
-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct platform_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ },
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
};
-module_nvmem_layout_driver(sl28vpd_layout);
+module_platform_driver(sl28vpd_layout);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michael Walle <michael@walle.cc>");
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
@@ -154,8 +154,7 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
@@ -170,8 +169,7 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device *dev;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
@@ -243,9 +241,4 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
}
#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */