| Message ID | 20221206200740.3567551-1-michael@walle.cc |
|---|---|
| Series | nvmem: core: introduce NVMEM layouts |
Message
Michael Walle
Dec. 6, 2022, 8:07 p.m. UTC
This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here:
https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/

NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.

Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than is currently possible with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).

The imx-ocotp driver is the only user of the global post processing hook; convert it to NVMEM layouts and drop the global post processing hook.

For now, the layouts are selected by the device tree. But the idea is that board files or other drivers could also set a layout, although no code for that exists yet.

Thanks to Miquel, the device tree bindings are already approved and merged.

NVMEM layouts as modules?
While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts need to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core were a module, the layouts could be modules, too.

Michael Walle (19):
  net: add helper eth_addr_add()
  of: base: add of_parse_phandle_with_optional_args()
  of: property: make #.*-cells optional for simple props
  of: property: add #nvmem-cell-cells property
  nvmem: core: fix device node refcounting
  nvmem: core: add an index parameter to the cell
  nvmem: core: move struct nvmem_cell_info to nvmem-provider.h
  nvmem: core: drop the removal of the cells in nvmem_add_cells()
  nvmem: core: fix cell removal on error
  nvmem: core: add nvmem_add_one_cell()
  nvmem: core: use nvmem_add_one_cell() in nvmem_add_cells_from_of()
  nvmem: core: introduce NVMEM layouts
  nvmem: core: add per-cell post processing
  nvmem: core: allow to modify a cell before adding it
  nvmem: imx-ocotp: replace global post processing with layouts
  nvmem: cell: drop global cell_post_process
  nvmem: core: provide own priv pointer in post process callback
  nvmem: layouts: add sl28vpd layout
  MAINTAINERS: add myself as sl28vpd nvmem layout driver

Miquel Raynal (2):
  nvmem: layouts: Add ONIE tlv layout driver
  MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer

 Documentation/driver-api/nvmem.rst |  15 ++
 MAINTAINERS                        |  12 ++
 drivers/nvmem/Kconfig              |   4 +
 drivers/nvmem/Makefile             |   1 +
 drivers/nvmem/core.c               | 295 +++++++++++++++++++++--------
 drivers/nvmem/imx-ocotp.c          |  34 ++--
 drivers/nvmem/layouts/Kconfig      |  23 +++
 drivers/nvmem/layouts/Makefile     |   7 +
 drivers/nvmem/layouts/onie-tlv.c   | 244 ++++++++++++++++++++++++
 drivers/nvmem/layouts/sl28vpd.c    | 153 +++++++++++++++
 drivers/of/property.c              |   6 +-
 include/linux/etherdevice.h        |  14 ++
 include/linux/nvmem-consumer.h     |  17 +-
 include/linux/nvmem-provider.h     |  95 +++++++++-
 include/linux/of.h                 |  25 +++
 15 files changed, 837 insertions(+), 108 deletions(-)
 create mode 100644 drivers/nvmem/layouts/Kconfig
 create mode 100644 drivers/nvmem/layouts/Makefile
 create mode 100644 drivers/nvmem/layouts/onie-tlv.c
 create mode 100644 drivers/nvmem/layouts/sl28vpd.c
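As a rough illustration of the layout mechanism described in the cover letter above, the sketch below shows what a layout driver along these lines could look like: registered at subsys_initcall() time, adding a cell at runtime from an add_cells() callback, and attaching per-cell post processing that derives additional MAC addresses with the eth_addr_add() helper introduced by this series. All structure fields, callback signatures, the cell offset and the compatible string are assumptions made for illustration, not code taken from the patches.

```c
/*
 * Minimal sketch of an NVMEM layout driver in the spirit of this series.
 * Field names, callback signatures and the nvmem_add_one_cell()/
 * nvmem_layout_register() usage below are assumptions for illustration.
 */
#include <linux/etherdevice.h>
#include <linux/init.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>

/* Per-cell post processing: derive the Nth MAC address from a base one. */
static int example_mac_post_process(void *priv, const char *id, int index,
				    unsigned int offset, void *buf,
				    size_t bytes)
{
	if (bytes != ETH_ALEN)
		return -EINVAL;

	if (index)
		eth_addr_add(buf, index);	/* helper added by this series */

	return 0;
}

/* Parse the NVMEM device at runtime and describe the cells it contains. */
static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	struct nvmem_cell_info info = {
		.name			= "base-mac-address",
		.offset			= 0x10,		/* made-up offset */
		.bytes			= ETH_ALEN,
		.read_post_process	= example_mac_post_process,
	};

	return nvmem_add_one_cell(nvmem, &info);
}

static const struct of_device_id example_of_match[] = {
	{ .compatible = "example,nvmem-layout" },	/* made-up binding */
	{ /* sentinel */ }
};

static struct nvmem_layout example_layout = {
	.name		= "example-layout",
	.of_compatible	= example_of_match,
	.add_cells	= example_add_cells,
};

static int __init example_layout_init(void)
{
	nvmem_layout_register(&example_layout);
	return 0;
}
subsys_initcall(example_layout_init);	/* built-in, see cover letter */
```

The subsys_initcall() registration mirrors the "NVMEM layouts as modules?" argument above: the cells have to exist before consumers probe, which is why the cover letter ties loadable-module layouts to the (built-in) NVMEM core.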
Comments
Hi Srinivas,

michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:

> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>
> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>
> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>
> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>
> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>
> Thanks to Miquel, the device tree bindings are already approved and merged.
>
> NVMEM layouts as modules?
> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.

I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.

Thanks,
Miquèl
Hi Miquel,

On 03/01/2023 15:39, Miquel Raynal wrote:
> Hi Srinivas,
>
> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>
>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>
>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>
>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>
>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>
>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>
>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>
>> NVMEM layouts as modules?
>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>
> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.

Thanks for the work!

Lets get some testing in -next.

Applied now,
--srini

> Thanks,
> Miquèl
Hi Srinivas,

srinivas.kandagatla@linaro.org wrote on Tue, 3 Jan 2023 15:51:31 +0000:

> Hi Miquel,
>
> On 03/01/2023 15:39, Miquel Raynal wrote:
>> Hi Srinivas,
>>
>> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>>
>>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>>
>>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>>
>>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>>
>>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>>
>>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>>
>>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>>
>>> NVMEM layouts as modules?
>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>>
>> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.
>
> Thanks for the work!
>
> Lets get some testing in -next.
>
> Applied now,

Excellent! Thanks a lot for the quick answer and thanks for applying, let's see how it behaves.

Thanks,
Miquèl
Am Dienstag, 3. Januar 2023, 16:51:31 CET schrieb Srinivas Kandagatla:
> Hi Miquel,
>
> On 03/01/2023 15:39, Miquel Raynal wrote:
>> Hi Srinivas,
>>
>> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>>
>>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>>
>>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>>
>>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>>
>>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>>
>>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>>
>>> NVMEM layouts as modules?
>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>>
>> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.
>
> Thanks for the work!
>
> Lets get some testing in -next.

This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):

root@tqma8-common:~# uname -r
6.2.0-rc2-next-20230105

> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90

These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.

> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22

These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.

Best regards,
Alexander
Hello,

alexander.stein@ew.tq-group.com wrote on Thu, 05 Jan 2023 12:04:52 +0100:

> Am Dienstag, 3. Januar 2023, 16:51:31 CET schrieb Srinivas Kandagatla:
>> Hi Miquel,
>>
>> On 03/01/2023 15:39, Miquel Raynal wrote:
>>> Hi Srinivas,
>>>
>>> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>>>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>>>
>>>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>>>
>>>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>>>
>>>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>>>
>>>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>>>
>>>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>>>
>>>> NVMEM layouts as modules?
>>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>>>
>>> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.
>>
>> Thanks for the work!
>>
>> Lets get some testing in -next.
>
> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
> root@tqma8-common:~# uname -r
> 6.2.0-rc2-next-20230105
>
>> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>
> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
>
>> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>
> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.

Should we just assume #nvmem-cell-cells = <0> by default? I guess it's a safe assumption.

Thanks,
Miquèl
Hi Alexander,

thanks for debugging. I'm not yet sure what is going wrong, so I have some more questions below.

>> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
>> root@tqma8-common:~# uname -r
>> 6.2.0-rc2-next-20230105
>>
>>> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>>> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>>
>> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
>>
>>> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>>> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>>
>> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.

What do you mean with not available? Not yet available because of probe order?

> Should we just assume #nvmem-cell-cells = <0> by default? I guess it's a safe assumption.

Actually, that's what patch 2/21 is for.

Alexander, did you verify that the EINVAL is returned by of_parse_phandle_with_optional_args()?

-michael
Hi Michael,

Am Donnerstag, 5. Januar 2023, 13:11:37 CET schrieb Michael Walle:
> Hi Alexander,
>
> thanks for debugging. I'm not yet sure what is going wrong, so I have some more questions below.
>
>>> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
>>> root@tqma8-common:~# uname -r
>>> 6.2.0-rc2-next-20230105
>>>
>>>> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>>>> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>>>
>>> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
>>>
>>>> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>>>> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>>>
>>> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.
>
> What do you mean with not available? Not yet available because of probe order?

Ah, I was talking about there is no nvmem cell being used in my PHY node, e.g. no 'nvmem-cells' nor 'nvmem-cell-names' (set to 'io_impedance_ctrl'). That's why of_property_match_string returns -EINVAL.

>> Should we just assume #nvmem-cell-cells = <0> by default? I guess it's a safe assumption.
>
> Actually, that's what patch 2/21 is for.
>
> Alexander, did you verify that the EINVAL is returned by of_parse_phandle_with_optional_args()?

Yep.

--8<--
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 1b61c8bf0de4..f2a85a31d039 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -1339,9 +1339,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
 	if (id)
 		index = of_property_match_string(np, "nvmem-cell-names", id);
 
+	pr_info("%s: index: %d\n", __func__, index);
 	ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
 						  "#nvmem-cell-cells",
 						  index, &cell_spec);
+	pr_info("%s: of_parse_phandle_with_optional_args: %d\n", __func__, ret);
 	if (ret)
 		return ERR_PTR(ret);
--8<--

Results in:

> [    1.861896] of_nvmem_cell_get: index: -22
> [    1.865934] of_nvmem_cell_get: of_parse_phandle_with_optional_args: -22
> [    1.872595] TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> [    2.402575] TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22

So, the index is wrong in the first place, but this was no problem until now.

Best regards,
Alexander
Hi,

Am 2023-01-05 13:21, schrieb Alexander Stein:
> Am Donnerstag, 5. Januar 2023, 13:11:37 CET schrieb Michael Walle:
>> thanks for debugging. I'm not yet sure what is going wrong, so I have some more questions below.
>>
>>>> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
>>>> root@tqma8-common:~# uname -r
>>>> 6.2.0-rc2-next-20230105
>>>>
>>>>> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>>>>> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>>>>
>>>> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
>>>>
>>>>> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>>>>> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>>>>
>>>> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.
>>
>> What do you mean with not available? Not yet available because of probe order?
>
> Ah, I was talking about there is no nvmem cell being used in my PHY node, e.g. no 'nvmem-cells' nor 'nvmem-cell-names' (set to 'io_impedance_ctrl'). That's why of_property_match_string returns -EINVAL.

Ahh I see. You mean ENOENT instead of ENODEV, right?

>>> Should we just assume #nvmem-cell-cells = <0> by default? I guess it's a safe assumption.
>>
>> Actually, that's what patch 2/21 is for.
>>
>> Alexander, did you verify that the EINVAL is returned by of_parse_phandle_with_optional_args()?
>
> Yep.
>
> --8<--
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 1b61c8bf0de4..f2a85a31d039 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -1339,9 +1339,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
>  	if (id)
>  		index = of_property_match_string(np, "nvmem-cell-names", id);
>
> +	pr_info("%s: index: %d\n", __func__, index);
>  	ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
>  						  "#nvmem-cell-cells",
>  						  index, &cell_spec);
> +	pr_info("%s: of_parse_phandle_with_optional_args: %d\n", __func__, ret);
>  	if (ret)
>  		return ERR_PTR(ret);
> --8<--
>
> Results in:
>> [    1.861896] of_nvmem_cell_get: index: -22
>> [    1.865934] of_nvmem_cell_get: of_parse_phandle_with_optional_args: -22
>> [    1.872595] TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>> [    2.402575] TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>
> So, the index is wrong in the first place, but this was no problem until now.

Thanks, could you try the following patch:

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 1b61c8bf0de4..1085abfcd9b1 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -1336,8 +1336,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
 	int ret;
 
 	/* if cell name exists, find index to the name */
-	if (id)
+	if (id) {
 		index = of_property_match_string(np, "nvmem-cell-names", id);
+		if (index < 0)
+			return ERR_PTR(-ENOENT);
+	}
 
 	ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
 						  "#nvmem-cell-cells",

Before patch 6/21, the -EINVAL was passed as index to of_parse_phandle() which then returned NULL, which caused the nvmem core to return ENOENT. I have a vague memory, that I made sure, that of_parse_phandle_with_optional_args() will also propagate the wrong index to its return code. But now, it won't be converted to ENOENT.

-michael
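The error code matters here because consumers that treat a cell as optional usually ignore only -ENOENT and fail their probe on anything else, which is how the leaked -EINVAL above turned into the DP83867 probe failure. A rough consumer-side sketch of that pattern follows; the function is hypothetical and the cell name merely mirrors the report above, this is not the actual dp83867 driver code.

```c
/*
 * Hypothetical handling of an optional NVMEM cell on the consumer side.
 * Sketch only; not taken from the DP83867 driver.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/of.h>

static int example_apply_io_impedance(struct device *dev)
{
	struct nvmem_cell *cell;

	cell = of_nvmem_cell_get(dev->of_node, "io_impedance_ctrl");
	if (IS_ERR(cell)) {
		/*
		 * -ENOENT means the DT simply does not describe this cell,
		 * so fall back to the driver's defaults.  Any other error
		 * (such as the -EINVAL discussed above) fails the probe.
		 */
		if (PTR_ERR(cell) == -ENOENT)
			return 0;
		return PTR_ERR(cell);
	}

	/* ... read and apply the cell value here ... */

	nvmem_cell_put(cell);
	return 0;
}
```

With the fix above, a node without 'nvmem-cells'/'nvmem-cell-names' again yields -ENOENT, so such a consumer falls back to its defaults instead of failing.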
Hi Michael,

Am Donnerstag, 5. Januar 2023, 13:51:53 CET schrieb Michael Walle:
> Hi,
>
> Am 2023-01-05 13:21, schrieb Alexander Stein:
>> Am Donnerstag, 5. Januar 2023, 13:11:37 CET schrieb Michael Walle:
>>> thanks for debugging. I'm not yet sure what is going wrong, so I have some more questions below.
>>>
>>>>> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
>>>>> root@tqma8-common:~# uname -r
>>>>> 6.2.0-rc2-next-20230105
>>>>>
>>>>>> OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
>>>>>> OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
>>>>>
>>>>> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
>>>>>
>>>>>> TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>>>>>> TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>>>>>
>>>>> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.
>>>
>>> What do you mean with not available? Not yet available because of probe order?
>>
>> Ah, I was talking about there is no nvmem cell being used in my PHY node, e.g. no 'nvmem-cells' nor 'nvmem-cell-names' (set to 'io_impedance_ctrl'). That's why of_property_match_string returns -EINVAL.
>
> Ahh I see. You mean ENOENT instead of ENODEV, right?

Yeah you are right here, ENOENT is the one missing.

>>>> Should we just assume #nvmem-cell-cells = <0> by default? I guess it's a safe assumption.
>>>
>>> Actually, that's what patch 2/21 is for.
>>>
>>> Alexander, did you verify that the EINVAL is returned by of_parse_phandle_with_optional_args()?
>>
>> Yep.
>>
>> --8<--
>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>> index 1b61c8bf0de4..f2a85a31d039 100644
>> --- a/drivers/nvmem/core.c
>> +++ b/drivers/nvmem/core.c
>> @@ -1339,9 +1339,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
>>  	if (id)
>>  		index = of_property_match_string(np, "nvmem-cell-names", id);
>>
>> +	pr_info("%s: index: %d\n", __func__, index);
>>  	ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
>>  						  "#nvmem-cell-cells",
>>  						  index, &cell_spec);
>> +	pr_info("%s: of_parse_phandle_with_optional_args: %d\n", __func__, ret);
>>  	if (ret)
>>  		return ERR_PTR(ret);
>> --8<--
>>
>> Results in:
>>> [    1.861896] of_nvmem_cell_get: index: -22
>>> [    1.865934] of_nvmem_cell_get: of_parse_phandle_with_optional_args: -22
>>> [    1.872595] TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
>>> [    2.402575] TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
>>
>> So, the index is wrong in the first place, but this was no problem until now.
>
> Thanks, could you try the following patch:
>
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 1b61c8bf0de4..1085abfcd9b1 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -1336,8 +1336,11 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
>  	int ret;
>  
>  	/* if cell name exists, find index to the name */
> -	if (id)
> +	if (id) {
>  		index = of_property_match_string(np, "nvmem-cell-names", id);
> +		if (index < 0)
> +			return ERR_PTR(-ENOENT);
> +	}
>  
>  	ret = of_parse_phandle_with_optional_args(np, "nvmem-cells",
>  						  "#nvmem-cell-cells",
>
> Before patch 6/21, the -EINVAL was passed as index to of_parse_phandle() which then returned NULL, which caused the nvmem core to return ENOENT. I have a vague memory, that I made sure, that of_parse_phandle_with_optional_args() will also propagate the wrong index to its return code. But now, it won't be converted to ENOENT.

Yes, this does the trick. Thanks

Best regards,
Alexander
Hi Michael/Miquel,

I had to revert Layout patches due to comments from Greg about making the layouts built-in rather than modules, he is not ready to merge them as it is.

His original comment:

"Why are we going back to "custom-built" kernel configurations? Why can this not be a loadable module? Distros are now forced to enable these layout and all kernels will have this dead code in the tree without any choice in the matter?

That's not ok, these need to be auto-loaded based on the hardware representation like any other kernel module. You can't force them to be always present, sorry."

I have applied most of the patches except:

nvmem: core: introduce NVMEM layouts
nvmem: core: add per-cell post processing
nvmem: core: allow to modify a cell before adding it
nvmem: imx-ocotp: replace global post processing with layouts
nvmem: cell: drop global cell_post_process
nvmem: core: provide own priv pointer in post process callback
nvmem: layouts: add sl28vpd layout
MAINTAINERS: add myself as sl28vpd nvmem layout driver
nvmem: layouts: Add ONIE tlv layout driver
MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer
nvmem: core: return -ENOENT if nvmem cell is not found
nvmem: layouts: Fix spelling mistake "platforn" -> "platform"
dt-bindings: nvmem: Fix spelling mistake "platforn" -> "platform"
nvmem: core: fix nvmem_layout_get_match_data()

Please rebase your patches on top of nvmem-next once layouts are converted to loadable modules.

thanks,
srini

On 03/01/2023 15:39, Miquel Raynal wrote:
> Hi Srinivas,
>
> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>
>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>
>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>
>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>
>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>
>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>
>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>
>> NVMEM layouts as modules?
>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>
> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.
>
> Thanks,
> Miquèl
Hi Srinivas,

+ Greg

srinivas.kandagatla@linaro.org wrote on Mon, 6 Feb 2023 20:31:46 +0000:

> Hi Michael/Miquel,
>
> I had to revert Layout patches due to comments from Greg about Making the layouts as built-in rather than modules, he is not ready to merge them as it is.

Ok this is the second time I see something similar happening:
- maintainer or maintainers group doing the review/apply job and sending to "upper" maintainer
- upper maintainer refusing for a "questionable" reason at this stage.

I am not saying the review is incorrect or anything. I'm just wondering whether, for the second time, I am facing a fair situation, either myself as a contributor or the intermediate maintainer who's being kind of bypassed.

What I mean is: the review process has happened. Nothing was hidden, this series has started leaving on the mailing lists more than two years ago. The contribution process which has been in place for many years asks the contributors to send new versions when the review process leads to comments, which we did. Once the series has been "accepted" it is expected that this series will be pulled during the next merge window. If there is something else to fix, there are 6 to 8 long weeks where contributors' fixes are welcome. Why not letting us the opportunity to use them? Why, for the second time, I am facing an extremely urgent situation where I have to cancel all my commitments just because a random comment has been made on a series which has been standing still for months?

What I would expect instead, is a discussion on the cover letter of the series where Michael explained why he did no choose to use modules in the first place. If it appears that for some reason it is best to enable NVMEM layouts as modules, we will send a timely series on top of the current one to enable that particular case.

>>> NVMEM layouts as modules?
>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.

I know Michael is busy after the FOSDEM and so am I, so, Greg, would you accept to take the PR as it is, participate to the discussion and wait for an update?

Thanks,
Miquèl

> His original comment,
>
> "Why are we going back to "custom-built" kernel configurations? Why can this not be a loadable module? Distros are now forced to enable these layout and all kernels will have this dead code in the tree without any choice in the matter?
>
> That's not ok, these need to be auto-loaded based on the hardware representation like any other kernel module. You can't force them to be always present, sorry.
> "
>
> I have applied most of the patches except
>
> nvmem: core: introduce NVMEM layouts
> nvmem: core: add per-cell post processing
> nvmem: core: allow to modify a cell before adding it
> nvmem: imx-ocotp: replace global post processing with layouts
> nvmem: cell: drop global cell_post_process
> nvmem: core: provide own priv pointer in post process callback
> nvmem: layouts: add sl28vpd layout
> MAINTAINERS: add myself as sl28vpd nvmem layout driver
> nvmem: layouts: Add ONIE tlv layout driver
> MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer
> nvmem: core: return -ENOENT if nvmem cell is not found
> nvmem: layouts: Fix spelling mistake "platforn" -> "platform"
> dt-bindings: nvmem: Fix spelling mistake "platforn" -> "platform"
> nvmem: core: fix nvmem_layout_get_match_data()
>
> Please rebase your patches on top of nvmem-next once layouts are converted to loadable modules.
>
> thanks,
> srini
>
> On 03/01/2023 15:39, Miquel Raynal wrote:
>> Hi Srinivas,
>>
>> michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100:
>>
>>> This is now the third attempt to fetch the MAC addresses from the VPD for the Kontron sl28 boards. Previous discussions can be found here: https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>>
>>> NVMEM cells are typically added by board code or by the devicetree. But as the cells get more complex, there is (valid) push back from the devicetree maintainers to not put that handling in the devicetree.
>>>
>>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and can add cells during runtime. That way it is possible to add more complex cells than it is possible right now with the offset/length/bits description in the device tree. For example, you can have post processing for individual cells (think of endian swapping, or ethernet offset handling).
>>>
>>> The imx-ocotp driver is the only user of the global post processing hook, convert it to nvmem layouts and drop the global post pocessing hook.
>>>
>>> For now, the layouts are selected by the device tree. But the idea is that also board files or other drivers could set a layout. Although no code for that exists yet.
>>>
>>> Thanks to Miquel, the device tree bindings are already approved and merged.
>>>
>>> NVMEM layouts as modules?
>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>>
>> I believe this series still applies even though -rc1 (and -rc2) are out now, may we know if you consider merging it anytime soon or if there are still discrepancies in the implementation you would like to discuss? Otherwise I would really like to see this laying in -next a few weeks before being sent out to Linus, just in case.
>>
>> Thanks,
>> Miquèl
On Mon, Feb 06, 2023 at 11:47:13PM +0100, Miquel Raynal wrote:
> Hi Srinivas,
>
> + Greg
>
> srinivas.kandagatla@linaro.org wrote on Mon, 6 Feb 2023 20:31:46 +0000:
>
>> Hi Michael/Miquel,
>>
>> I had to revert Layout patches due to comments from Greg about Making the layouts as built-in rather than modules, he is not ready to merge them as it is.
>
> Ok this is the second time I see something similar happening:
> - maintainer or maintainers group doing the review/apply job and sending to "upper" maintainer
> - upper maintainer refusing for a "questionable" reason at this stage.

Only the second time? You've gotten lucky then :)

This happens all the time based on experience levels of reviewers and just the very nature of how this whole process works. It's nothing unusual and is good overall for the health of the project. In other words, this is a feature, not a bug.

> I am not saying the review is incorrect or anything. I'm just wondering whether, for the second time, I am facing a fair situation, either myself as a contributor or the intermediate maintainer who's being kind of bypassed.
>
> What I mean is: the review process has happened. Nothing was hidden, this series has started leaving on the mailing lists more than two years ago. The contribution process which has been in place for many years asks the contributors to send new versions when the review process leads to comments, which we did. Once the series has been "accepted" it is expected that this series will be pulled during the next merge window. If there is something else to fix, there are 6 to 8 long weeks where contributors' fixes are welcome. Why not letting us the opportunity to use them? Why, for the second time, I am facing an extremely urgent situation where I have to cancel all my commitments just because a random comment has been made on a series which has been standing still for months?

There's no need to cancel anything, there are no deadlines in kernel development and I am not asking for any sort of rush whatsoever. So relax, take a week or two off (or month), and come back with an updated patch series when you are ready. And feel free to cc: me on it if you want my reviews (as I objected to these patches as-is) so that we don't end up in the same situation (where one maintainer accepted something, but the maintainer they sent it to rejected it.)

Again, there's no rush, and this is totally normal.

> What I would expect instead, is a discussion on the cover letter of the series where Michael explained why he did no choose to use modules in the first place. If it appears that for some reason it is best to enable NVMEM layouts as modules, we will send a timely series on top of the current one to enable that particular case.

Why not rework the existing series to handle this and not require "fixups" at the end of the series? We don't normally create bugs and then fix them up in the same patch set, as you know, so this shouldn't be treated any differently.

>>>> NVMEM layouts as modules?
>>>> While possible in principle, it doesn't make any sense because the NVMEM core can't be compiled as a module. The layouts needs to be available at probe time. (That is also the reason why they get registered with subsys_initcall().) So if the NVMEM core would be a module, the layouts could be modules, too.
>
> I know Michael is busy after the FOSDEM and so am I, so, Greg, would you accept to take the PR as it is, participate to the discussion and wait for an update?

Kernel development doesn't work on "PR" :) And no, I can't take these, as I don't agree with them, and I totally imagine others will object for the same reason I did (and then they would object to me, as the patches would be in my tree, as I am then responsible for them.)

So send an updated version whenever you have the chance. Again, there's no rush, deadline, or anything else here. Code is accepted when it is ready and correct, not anytime earlier.

thanks,
greg k-h