[V7,0/3] Generate device tree node for pci devices

Message ID 1674183732-5157-1-git-send-email-lizhi.hou@amd.com
Series: Generate device tree node for pci devices

Message

Lizhi Hou Jan. 20, 2023, 3:02 a.m. UTC
  This patch series introduces OF overlay support for PCI devices which
primarily addresses two use cases. First, it provides a data driven method
to describe hardware peripherals that are present in a PCI endpoint and
hence can be accessed by the PCI host. Second, it allows reuse of an OF
compatible driver -- often used in SoC platforms -- in a PCI host based
system.

There are two series of devices that rely on this patch series:

  1) Xilinx Alveo Accelerator cards (FPGA based device)
  2) Microchip LAN9662 Ethernet Controller

     Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/

Normally, the PCI core discovers PCI devices and their BARs using the
PCI enumeration process. However, the process does not provide a way to
discover the hardware peripherals that are present in a PCI device, and
which can be accessed through the PCI BARs. Also, the enumeration process
does not provide a way to associate MSI-X vectors of a PCI device with the
hardware peripherals that are present in the device. PCI device drivers
often use header files to describe the hardware peripherals and their
resources, as there is no standard data driven way to do so. This patch
series proposes using a flattened device tree blob to describe the
peripherals in a data driven way. Based on previous discussion, using a
device tree overlay is the best way to unflatten the blob and populate
platform devices. To use device tree overlays, there are three obvious
problems that need to be resolved.

First, we need to create a base tree for non-DT systems such as x86_64. A
patch series has been submitted for this:
https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/

Second, a device tree node corresponding to the PCI endpoint is required
for overlaying the flattened device tree blob for that PCI endpoint.
Because PCI is a self-discoverable bus, a device tree node is usually not
created for PCI devices. This series adds support for generating a device
tree node for a PCI device that advertises itself using the PCI quirks
infrastructure.

Third, we need to generate device tree nodes for PCI bridges since a child
PCI endpoint may choose to have a device tree node created.
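
For illustration, once the above problems are solved, a PCI driver could
apply its flattened device tree blob onto the node generated for its
endpoint roughly as sketched below. This is only a hedged sketch: the
of_overlay_fdt_apply_to_node() helper (from the companion LAN966x work)
and the overlay blob symbols are assumptions, not part of this series.

#include <linux/of.h>
#include <linux/pci.h>

/* Symbols for a built-in overlay blob; the names are illustrative only. */
extern const char __dtbo_example_begin[];
extern const char __dtbo_example_end[];

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct device_node *np = pdev->dev.of_node; /* node created by this series */
	int ovcs_id = 0;

	if (!np)
		return -ENODEV;

	/*
	 * Graft the driver's overlay under the generated PCI endpoint node.
	 * Helper name and signature are assumed from the LAN966x series.
	 */
	return of_overlay_fdt_apply_to_node(__dtbo_example_begin,
					    __dtbo_example_end - __dtbo_example_begin,
					    &ovcs_id, np);
}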

This patch series is made up of three patches.

The first patch adds OF interfaces to create or destroy an OF node
dynamically.
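
As a rough sketch, a caller of this interface could look like the
following. of_changeset_init(), of_changeset_apply() and
of_changeset_destroy() are existing interfaces; the node-creation helper
name and signature below are assumptions about what this patch adds and
may not match the v7 code exactly.

#include <linux/of.h>

static int example_add_child(struct device_node *parent,
			     struct of_changeset *cset)
{
	struct device_node *np;
	int ret;

	of_changeset_init(cset);

	/* Hypothetical creation helper added by this patch */
	np = of_changeset_create_node(cset, parent, "dev@0,0");
	if (!np) {
		of_changeset_destroy(cset);
		return -ENOMEM;
	}

	/* Properties would be added here; see the third patch. */

	ret = of_changeset_apply(cset);	/* serialized by of_mutex internally */
	of_node_put(np);
	return ret;
}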

The second patch introduces a kernel option, CONFIG_DYNAMIC_PCI_OF_NODEX.
When the option is turned on, the kernel will generate device tree nodes
for all PCI bridges unconditionally. The patch also shows how to use the
PCI quirks infrastructure, DECLARE_PCI_FIXUP_FINAL, to generate a device
tree node for a device. Specifically, the patch generates a device tree
node for the Xilinx Alveo U50 PCIe accelerator device. The generated
device tree nodes do not have any properties.
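
A hedged sketch of such a quirk is shown below; the helper name
of_pci_make_dev_node() and the device ID are placeholders for
illustration rather than values taken verbatim from the patch.

#include <linux/pci.h>

static void example_alveo_u50_dt_node(struct pci_dev *pdev)
{
	/* Hypothetical helper from this patch that builds a node for pdev */
	of_pci_make_dev_node(pdev);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020 /* assumed U50 ID */,
			example_alveo_u50_dt_node);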

The third patch adds basic properties ('reg', 'compatible' and
'device_type') to the dynamically generated device tree nodes. More
properties can be added in the future.
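
For illustration, creating those properties through the
of_changeset_add_prop_* helpers mentioned in the changelog below might
look like this sketch; the exact helper names and signatures are
assumptions, and the 'reg' encoding is simplified.

#include <linux/kernel.h>
#include <linux/of.h>

static int example_add_basic_props(struct of_changeset *cset,
				   struct device_node *np, u32 devfn)
{
	/* Simplified PCI 'reg': phys.hi cell plus four zero cells */
	u32 reg[5] = { devfn << 8, 0, 0, 0, 0 };
	int ret;

	ret = of_changeset_add_prop_string(cset, np, "compatible",
					   "pci10ee,5020"); /* example ID */
	if (ret)
		return ret;

	/* "pci" is used for bridge nodes; endpoint nodes may differ */
	ret = of_changeset_add_prop_string(cset, np, "device_type", "pci");
	if (ret)
		return ret;

	return of_changeset_add_prop_u32_array(cset, np, "reg", reg,
					       ARRAY_SIZE(reg));
}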

Here is an example of the device tree nodes generated within ARM64 QEMU.
# lspci -t    
-[0000:00]-+-00.0
           +-01.0-[01]--
           +-01.1-[02]----00.0
           +-01.2-[03]----00.0
           +-01.3-[04]----00.0
           +-01.4-[05]----00.0
           +-01.5-[06]--
           +-01.6-[07]--
           +-01.7-[08]--
           +-02.0-[09-0b]----00.0-[0a-0b]----00.0-[0b]--+-00.0
           |                                            \-00.1
           +-02.1-[0c]--
           \-03.0-[0d-0e]----00.0-[0e]----01.0

# tree /sys/firmware/devicetree/base/pcie\@10000000
/sys/firmware/devicetree/base/pcie@10000000
|-- #address-cells
|-- #interrupt-cells
|-- #size-cells
|-- bus-range
|-- compatible
|-- device_type
|-- dma-coherent
|-- interrupt-map
|-- interrupt-map-mask
|-- linux,pci-domain
|-- msi-parent
|-- name
|-- pci@1,0
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,1
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,2
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,3
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,4
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,5
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,6
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@1,7
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@2,0
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- pci@0,0
|   |   |-- #address-cells
|   |   |-- #size-cells
|   |   |-- compatible
|   |   |-- device_type
|   |   |-- pci@0,0
|   |   |   |-- #address-cells
|   |   |   |-- #size-cells
|   |   |   |-- compatible
|   |   |   |-- dev@0,0
|   |   |   |   |-- compatible
|   |   |   |   `-- reg
|   |   |   |-- dev@0,1
|   |   |   |   |-- compatible
|   |   |   |   `-- reg
|   |   |   |-- device_type
|   |   |   |-- ranges
|   |   |   `-- reg
|   |   |-- ranges
|   |   `-- reg
|   |-- ranges
|   `-- reg
|-- pci@2,1
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- ranges
|   `-- reg
|-- pci@3,0
|   |-- #address-cells
|   |-- #size-cells
|   |-- compatible
|   |-- device_type
|   |-- pci@0,0
|   |   |-- #address-cells
|   |   |-- #size-cells
|   |   |-- compatible
|   |   |-- device_type
|   |   |-- ranges
|   |   `-- reg
|   |-- ranges
|   `-- reg
|-- ranges
`-- reg

Changes since v6:
- Removed single line wrapper functions
- Added Signed-off-by Clément Léger <clement.leger@bootlin.com>

Changes since v5:
- Fixed code review comments
- Fixed incorrect 'ranges' and 'reg' properties and verified address
  translation.

Changes since RFC v4:
- Fixed code review comments

Changes since RFC v3:
- Split the Xilinx Alveo U50 PCI quirk to a separate patch
- Minor changes in commit description and code comment

Changes since RFC v2:
- Merged patch 3 with patch 2
- Added OF interfaces of_changeset_add_prop_* and used them to create
  properties.
- Added '#address-cells', '#size-cells' and 'ranges' properties.

Changes since RFC v1:
- Added one patch to create basic properties.
- To move DT related code out of PCI subsystem, replaced of_node_alloc()
  with of_create_node()/of_destroy_node()

Lizhi Hou (3):
  of: dynamic: Add interfaces for creating device node dynamically
  PCI: Create device tree node for selected devices
  PCI: Add PCI quirks to generate device tree node for Xilinx Alveo U50

 drivers/of/dynamic.c        | 197 +++++++++++++++++++++++++++++++++
 drivers/pci/Kconfig         |  12 ++
 drivers/pci/Makefile        |   1 +
 drivers/pci/bus.c           |   2 +
 drivers/pci/msi/irqdomain.c |   6 +-
 drivers/pci/of.c            |  71 ++++++++++++
 drivers/pci/of_property.c   | 212 ++++++++++++++++++++++++++++++++++++
 drivers/pci/pci-driver.c    |   3 +-
 drivers/pci/pci.h           |  19 ++++
 drivers/pci/quirks.c        |  11 ++
 drivers/pci/remove.c        |   1 +
 include/linux/of.h          |  24 ++++
 12 files changed, 556 insertions(+), 3 deletions(-)
 create mode 100644 drivers/pci/of_property.c
  

Comments

Frank Rowand Jan. 23, 2023, 4:32 a.m. UTC | #1
Hi Rob, Lizhi,

On 1/19/23 21:02, Lizhi Hou wrote:
> This patch series introduces OF overlay support for PCI devices which
> primarily addresses two use cases. First, it provides a data driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of a OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.

I had hoped to review this series by today, but have not yet due to working
on some new unittest features.  I hope to get to this series Monday.

-Frank

< snip >
  
Frank Rowand Jan. 23, 2023, 11:40 p.m. UTC | #2
On 1/22/23 22:32, Frank Rowand wrote:
> Hi Rob, Lizhi,
> 
> On 1/19/23 21:02, Lizhi Hou wrote:
>> This patch series introduces OF overlay support for PCI devices which
>> primarily addresses two use cases. First, it provides a data driven method
>> to describe hardware peripherals that are present in a PCI endpoint and
>> hence can be accessed by the PCI host. Second, it allows reuse of a OF
>> compatible driver -- often used in SoC platforms -- in a PCI host based
>> system.
> 
> I had hoped to review this series by today, but have not yet due to working
> on some new unittest features.  I hope to get to this series Monday.

I have skimmed through the series, and dug more deeply into portions of
the patches, but have not yet fully examined the full series.

I will be on vacation for a week or so, and will resume the detailed
reading of the series.

One overall comment at this point is that this series is somewhat
duplicating portions of the (incomplete and fragile) overlay functionality
but not fully.  One trivial example is no locking to prevent interference
between tree modification by overlay code that could be occurring
asynchronously at the same time this new code is modifying the tree.

-Frank

< snip >
  
Christian Gmeiner Feb. 10, 2023, 9:37 a.m. UTC | #3
Hi all

> This patch series introduces OF overlay support for PCI devices which
> primarily addresses two use cases. First, it provides a data driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of a OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.
>

I have tested this patch series on a FPGA connected via PCIe. I would love
to see this patch series hit mainline soon.

Tested-by: Christian Gmeiner <christian.gmeiner@gmail.com>
  
Frank Rowand Feb. 22, 2023, 3:26 p.m. UTC | #4
Hi Lizhi,

On 1/23/23 17:40, Frank Rowand wrote:
> On 1/22/23 22:32, Frank Rowand wrote:
>> Hi Rob, Lizhi,
>>
>> On 1/19/23 21:02, Lizhi Hou wrote:
>>> This patch series introduces OF overlay support for PCI devices which
>>> primarily addresses two use cases. First, it provides a data driven method
>>> to describe hardware peripherals that are present in a PCI endpoint and
>>> hence can be accessed by the PCI host. Second, it allows reuse of a OF
>>> compatible driver -- often used in SoC platforms -- in a PCI host based
>>> system.
>>
>> I had hoped to review this series by today, but have not yet due to working
>> on some new unittest features.  I hope to get to this series Monday.
> 
> I have skimmed through the series, and dug more deeply into portions of
> the patches, but have not yet fully examined the full series.
> 
> I will be on vacation for a week or so, and will resume the detailed
> reading of the series.

I am back from vacation, and I have completed the other devicetree
related tasks that were also at the top of my devicetree todo list.

I will resume the detailed reading of the series as the item that is
now at the top of my devicetree todo list.  But I consider any review
comments coming out of this as trivial compared to the issue raised in
the following paragraph:

> 
> One overall comment at this point, is that this series is somewhat
> duplicating portions of the (incomplete and fragile) overlay functionality
> but not fully.  One trivial example is no locking to prevent interference
> between tree modification by overlay code that could be occurring
> asynchronously at the same time this new code is modifying the tree.

Since there was no reply to the above paragraph, I am guessing that what
I wrote was not clear enough.  As far as I am concerned, this issue is
critical.  This patch series creates a body of code that is _more fragile_
and _more incomplete_ than the existing overlay code.  I have been very
clear over many years that the overlay code is not usable in its current
implementation.

Further, the body of code in this patch series will interact with the
overlay code in a manner that makes the overlay code even more fragile
and not usable.

-Frank

< snip >
  
Lizhi Hou Feb. 23, 2023, 12:37 a.m. UTC | #5
Hi Frank,

On 2/22/23 07:26, Frank Rowand wrote:
> Hi Lizhi,
>
> On 1/23/23 17:40, Frank Rowand wrote:
>> On 1/22/23 22:32, Frank Rowand wrote:
>>> Hi Rob, Lizhi,
>>>
>>> On 1/19/23 21:02, Lizhi Hou wrote:
>>>> This patch series introduces OF overlay support for PCI devices which
>>>> primarily addresses two use cases. First, it provides a data driven method
>>>> to describe hardware peripherals that are present in a PCI endpoint and
>>>> hence can be accessed by the PCI host. Second, it allows reuse of a OF
>>>> compatible driver -- often used in SoC platforms -- in a PCI host based
>>>> system.
>>> I had hoped to review this series by today, but have not yet due to working
>>> on some new unittest features.  I hope to get to this series Monday.
>> I have skimmed through the series, and dug more deeply into portions of
>> the patches, but have not yet fully examined the full series.
>>
>> I will be on vacation for a week or so, and will resume the detailed
>> reading of the series.
> I am back from vacation, and I have completed the other devicetree
> related tasks that were also at the top of my devicetree todo list.
>
> I will resume the detailed reading of the series as the item that is
> now at the top of my devicetree todo list.  But I consider any review
> comments coming out of this as trivial compared to the issue raised in
> the following paragraph:
>
>> One overall comment at this point, is that this series is somewhat
>> duplicating portions of the (incomplete and fragile) overlay functionality
>> but not fully.  One trivial example is no locking to prevent interference
>> between tree modification by overlay code that could be occurring
>> asynchronously at the same time this new code is modifying the tree.
> Since there was no reply to the above paragraph, I am guessing that what
> I wrote was not clear enough.  As far as I am concerned, this issue is
> critical.  This patch series creates a body of code that is _more fragile_
> and _more incomplete_ than the existing overlay code.  I have been very
> clear over many years that the overlay code is not usable in its current
> implementation.
>
> Further, the body of code in this patch series will interact with the
> overlay code in a manner that makes the overlay code even more fragile
> and not usable.
>
> -Frank

I did not fully understand the concern here. And I thought you would add
more after your vacation. :)

This patch calls of_changeset_apply() to add new nodes to the base tree.
Inside of_changeset_apply(), the global lock of_mutex is used to protect
the change. This function is also used by hotplug-cpu.c,
i2c-demux-pinctrl.c, and pnv_php.c.

Do you mean of_changeset_apply() is fragile and should not be used?

The scope of this patch is just to create a device tree node for a PCI
device and add the node to the base tree during PCI enumeration.

Could you give me example code for the race condition you are worried
about?


Thanks,

Lizhi


< snip >
  
Frank Rowand Feb. 26, 2023, 10:38 p.m. UTC | #6
Hi Clément, Hi Lizhi,

On 1/19/23 21:02, Lizhi Hou wrote:
> This patch series introduces OF overlay support for PCI devices which
> primarily addresses two use cases. First, it provides a data driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of a OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.
> 
> There are 2 series devices rely on this patch:
> 
>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>   2) Microchip LAN9662 Ethernet Controller
> 

Digging back through some history:

>      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/

(I am selectively pulling two fragments, see the above link for the
full email.)

Includes the following:

   A driver using this support was added and can be seen at [3]. This
   driver embeds a builtin overlay and applies it to the live tree using
   of_overlay_fdt_apply_to_node(). An interrupt driver is also included and

and

   This series was tested on a x86 kernel using CONFIG_OF under a virtual
   machine using PCI passthrough.

   Link: [1] https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
   Link: [2] https://lore.kernel.org/lkml/20220408174841.34458529@fixe.home/T/
   Link: [3] https://github.com/clementleger/linux/tree/lan966x/of_support

Following link 3 to see how the driver implemented the concept, I arrived
at a git tree, with the commit be42efa "mfd: lan966x: add pci driver",
and have been looking at the code there.

Clément, is this still the best example of a driver implementation that
would use the framework proposed in the "[PATCH V7 0/3] Generate device
tree node for pci devices" patch series? And this is the driver for the
device listed as item 2 above "2) Microchip LAN9662 Ethernet Controller"?

-Frank

> 
> Normally, the PCI core discovers PCI devices and their BARs using the
> PCI enumeration process. However, the process does not provide a way to
> discover the hardware peripherals that are present in a PCI device, and
> which can be accessed through the PCI BARs. Also, the enumeration process
> does not provide a way to associate MSI-X vectors of a PCI device with the
> hardware peripherals that are present in the device. PCI device drivers
> often use header files to describe the hardware peripherals and their
> resources as there is no standard data driven way to do so. This patch
> series proposes to use flattened device tree blob to describe the
> peripherals in a data driven way. Based on previous discussion, using
> device tree overlay is the best way to unflatten the blob and populate
> platform devices. To use device tree overlay, there are three obvious
> problems that need to be resolved.
> 
> First, we need to create a base tree for non-DT system such as x86_64. A
> patch series has been submitted for this:
> https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
> https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/
> 
> Second, a device tree node corresponding to the PCI endpoint is required
> for overlaying the flattened device tree blob for that PCI endpoint.
> Because PCI is a self-discoverable bus, a device tree node is usually not
> created for PCI devices. This series adds support to generate a device
> tree node for a PCI device which advertises itself using PCI quirks
> infrastructure.
> 
> Third, we need to generate device tree nodes for PCI bridges since a child
> PCI endpoint may choose to have a device tree node created.
> 
> This patch series is made up of three patches.

< snip >
  
Frank Rowand Feb. 27, 2023, 6:51 a.m. UTC | #7
On 1/19/23 21:02, Lizhi Hou wrote:
> This patch series introduces OF overlay support for PCI devices which
> primarily addresses two use cases. First, it provides a data driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of a OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.
> 
> There are 2 series devices rely on this patch:
> 
>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>   2) Microchip LAN9662 Ethernet Controller
> 
>      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> 


> Normally, the PCI core discovers PCI devices and their BARs using the
> PCI enumeration process. However, the process does not provide a way to
> discover the hardware peripherals that are present in a PCI device, and
> which can be accessed through the PCI BARs. Also, the enumeration process

I'm confused.  The PCI Configuration Header Registers should describe the
hardware on the PCI card.

Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
themselves, so I would like to analyze that case separately), does the
second device, "Microchip LAN9662 Ethernet Controller" properly implement
the PCI Configuration Header Registers?  What additional information is
needed that is not provided in those registers?

-Frank

< snip >
  
Clément Léger Feb. 27, 2023, 10:22 a.m. UTC | #8
On Sun, 26 Feb 2023 16:38:58 -0600,
Frank Rowand <frowand.list@gmail.com> wrote:

> Hi Clément, Hi Lizhi,
> 
> On 1/19/23 21:02, Lizhi Hou wrote:
> > This patch series introduces OF overlay support for PCI devices which
> > primarily addresses two use cases. First, it provides a data driven method
> > to describe hardware peripherals that are present in a PCI endpoint and
> > hence can be accessed by the PCI host. Second, it allows reuse of a OF
> > compatible driver -- often used in SoC platforms -- in a PCI host based
> > system.
> > 
> > There are 2 series devices rely on this patch:
> > 
> >   1) Xilinx Alveo Accelerator cards (FPGA based device)
> >   2) Microchip LAN9662 Ethernet Controller
> >   
> 
> Digging back through some history:
> 
> >      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/  
> 
> (I am selectively pulling two fragments, see the above link for the
> full email.)
> 
> Includes the following:
> 
>    A driver using this support was added and can be seen at [3]. This
>    driver embeds a builtin overlay and applies it to the live tree using
>    of_overlay_fdt_apply_to_node(). An interrupt driver is also included and
> 
> and
> 
>    This series was tested on a x86 kernel using CONFIG_OF under a virtual
>    machine using PCI passthrough.
> 
>    Link: [1] https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
>    Link: [2] https://lore.kernel.org/lkml/20220408174841.34458529@fixe.home/T/
>    Link: [3] https://github.com/clementleger/linux/tree/lan966x/of_support
> 
> Following link 3 to see how the driver implemented the concept, I arrived
> at a git tree, with the commit be42efa "mfd: lan966x: add pci driver",
> and have been looking at the code there.
> 
> Clément, is this still the best example of a driver implementation that
> would use the framework proposed in the "[PATCH V7 0/3] Generate device
> tree node for pci devices" patch series? And this is the driver for the
> device listed as item 2 above "2) Microchip LAN9662 Ethernet Controller"?

Hi Frank,

The driver has evolved slightly to be based on Lizhi's patches, and the
interrupt driver was reworked into a standard platform driver. I'll
clean that up and push a new branch based on this work.

This driver is indeed the driver for the LAN9662 Ethernet Controller,
and it allows using the 2 SFP ports and 2 RJ45 ports successfully (which
involves multiple subsystems and drivers).

While doing this work, I found multiple of_node refcount issues which I
fixed and which are currently being reviewed. I won't be surprised if
there are others lying around in various parts of the kernel. Just saying
so you know there is an ongoing effort to make that more robust.

Clément

> 
> -Frank
> 
> > 
> > Normally, the PCI core discovers PCI devices and their BARs using the
> > PCI enumeration process. However, the process does not provide a way to
> > discover the hardware peripherals that are present in a PCI device, and
> > which can be accessed through the PCI BARs. Also, the enumeration process
> > does not provide a way to associate MSI-X vectors of a PCI device with the
> > hardware peripherals that are present in the device. PCI device drivers
> > often use header files to describe the hardware peripherals and their
> > resources as there is no standard data driven way to do so. This patch
> > series proposes to use flattened device tree blob to describe the
> > peripherals in a data driven way. Based on previous discussion, using
> > device tree overlay is the best way to unflatten the blob and populate
> > platform devices. To use device tree overlay, there are three obvious
> > problems that need to be resolved.
> > 
> > First, we need to create a base tree for non-DT system such as x86_64. A
> > patch series has been submitted for this:
> > https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
> > https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/
> > 
> > Second, a device tree node corresponding to the PCI endpoint is required
> > for overlaying the flattened device tree blob for that PCI endpoint.
> > Because PCI is a self-discoverable bus, a device tree node is usually not
> > created for PCI devices. This series adds support to generate a device
> > tree node for a PCI device which advertises itself using PCI quirks
> > infrastructure.
> > 
> > Third, we need to generate device tree nodes for PCI bridges since a child
> > PCI endpoint may choose to have a device tree node created.
> > 
> > This patch series is made up of three patches.  
> 
> < snip >
>
  
Clément Léger Feb. 27, 2023, 10:31 a.m. UTC | #9
On Mon, 27 Feb 2023 00:51:29 -0600,
Frank Rowand <frowand.list@gmail.com> wrote:

> On 1/19/23 21:02, Lizhi Hou wrote:
> > This patch series introduces OF overlay support for PCI devices which
> > primarily addresses two use cases. First, it provides a data driven method
> > to describe hardware peripherals that are present in a PCI endpoint and
> > hence can be accessed by the PCI host. Second, it allows reuse of a OF
> > compatible driver -- often used in SoC platforms -- in a PCI host based
> > system.
> > 
> > There are 2 series devices rely on this patch:
> > 
> >   1) Xilinx Alveo Accelerator cards (FPGA based device)
> >   2) Microchip LAN9662 Ethernet Controller
> > 
> >      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> >   
> 
> 
> > Normally, the PCI core discovers PCI devices and their BARs using the
> > PCI enumeration process. However, the process does not provide a way to
> > discover the hardware peripherals that are present in a PCI device, and
> > which can be accessed through the PCI BARs. Also, the enumeration process  
> 
> I'm confused.  The PCI Configuration Header Registers should describe the
> hardware on the PCI card.
> 
> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
> themselves, so I would like to analyze that case separately), does the
> second device, "Microchip LAN9662 Ethernet Controller" properly implement
> the PCI Configuration Header Registers?  What additional information is
> needed that is not provided in those registers?

Hi Frank,

I guess Lizhi wanted to say that it does not provide a way to describe
all the "platform" devices that are exposed by this PCI device, which
is of course the whole point of the work we are doing right now. But
all the BARs are correctly described by the LAN9662 PCI card.

Clément

> 
> -Frank
> 
> > < snip >
>
  
Frank Rowand March 3, 2023, 11:42 p.m. UTC | #10
On 2/27/23 04:31, Clément Léger wrote:
> Le Mon, 27 Feb 2023 00:51:29 -0600,
> Frank Rowand <frowand.list@gmail.com> a écrit :
> 
>> On 1/19/23 21:02, Lizhi Hou wrote:
>>> This patch series introduces OF overlay support for PCI devices which
>>> primarily addresses two use cases. First, it provides a data driven method
>>> to describe hardware peripherals that are present in a PCI endpoint and
>>> hence can be accessed by the PCI host. Second, it allows reuse of a OF
>>> compatible driver -- often used in SoC platforms -- in a PCI host based
>>> system.
>>>
>>> There are 2 series devices rely on this patch:
>>>
>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>   2) Microchip LAN9662 Ethernet Controller
>>>
>>>      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>>   
>>
>>
>>> Normally, the PCI core discovers PCI devices and their BARs using the
>>> PCI enumeration process. However, the process does not provide a way to
>>> discover the hardware peripherals that are present in a PCI device, and
>>> which can be accessed through the PCI BARs. Also, the enumeration process  
>>
>> I'm confused.  The PCI Configuration Header Registers should describe the
>> hardware on the PCI card.
>>
>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>> themselves, so I would like to analyze that case separately), does the
>> second device, "Microchip LAN9662 Ethernet Controller" properly implement
>> the PCI Configuration Header Registers?  What additional information is
>> needed that is not provided in those registers?
> 
> Hi Frank,
> 
> I guess Lizhi wanted to say that it does not provide a way to describe
> all the "platform" devices that are exposed by this PCI device. Which
> is of course the whole point of the work we are doing right now. But
> all the BARs are correctly described by the LAN9662 PCI card.
> 
> Clément

I remain confused.

[RFC 00/10] add support for fwnode in i2c mux system and sfp
https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/

  references a PCIe driver:
  [2] https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c

So there is a PCIe driver that works.

However, the RFC patch series was proposing adding fwnode support to the driver.  My first
surface reading (just part of that one email, not the entire series or the replies yet)
notes:

  ... However, when
  plugged in a PCIe slot (on a x86), there is no device-tree support and
  the peripherals that are present must be described in some other way.

I am assuming that the peripherals are what you mentioned above as '"platform"
devices'.  This is where my current confusion lies.  Are the "platform"
devices accessed via the PCI bus or is there some other electrical connection
between the host system and the PCIe card?

If the "platform" devices are accessed via the PCI bus, then I would expect them
to be described by PCI configuration header registers.  Are the PCI configuration
registers to describe the "platform" devices not present?

I'll read through the fwnode RFC thread to add to see what happened to the proposal.

-Frank

> 
>>
>> -Frank
>>
>>> < snip >
>>
> 
> 
>
  
Clément Léger March 6, 2023, 8:35 a.m. UTC | #11
On 2023-03-04 00:42, Frank Rowand wrote:
> On 2/27/23 04:31, Clément Léger wrote:
>> Le Mon, 27 Feb 2023 00:51:29 -0600,
>> Frank Rowand <frowand.list@gmail.com> a écrit :
>> 
>>> On 1/19/23 21:02, Lizhi Hou wrote:
>>>> This patch series introduces OF overlay support for PCI devices 
>>>> which
>>>> primarily addresses two use cases. First, it provides a data driven 
>>>> method
>>>> to describe hardware peripherals that are present in a PCI endpoint 
>>>> and
>>>> hence can be accessed by the PCI host. Second, it allows reuse of a 
>>>> OF
>>>> compatible driver -- often used in SoC platforms -- in a PCI host 
>>>> based
>>>> system.
>>>> 
>>>> There are 2 series devices rely on this patch:
>>>> 
>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>>   2) Microchip LAN9662 Ethernet Controller
>>>> 
>>>>      Please see: 
>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>>> 
>>> 
>>> 
>>>> Normally, the PCI core discovers PCI devices and their BARs using 
>>>> the
>>>> PCI enumeration process. However, the process does not provide a way 
>>>> to
>>>> discover the hardware peripherals that are present in a PCI device, 
>>>> and
>>>> which can be accessed through the PCI BARs. Also, the enumeration 
>>>> process
>>> 
>>> I'm confused.  The PCI Configuration Header Registers should describe 
>>> the
>>> hardware on the PCI card.
>>> 
>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>>> themselves, so I would like to analyze that case separately), does 
>>> the
>>> second device, "Microchip LAN9662 Ethernet Controller" properly 
>>> implement
>>> the PCI Configuration Header Registers?  What additional information 
>>> is
>>> needed that is not provided in those registers?
>> 
>> Hi Frank,
>> 
>> I guess Lizhi wanted to say that it does not provide a way to describe
>> all the "platform" devices that are exposed by this PCI device. Which
>> is of course the whole point of the work we are doing right now. But
>> all the BARs are correctly described by the LAN9662 PCI card.
>> 
>> Clément
> 
> I remain confused.
> 
> [RFC 00/10] add support for fwnode in i2c mux system and sfp
> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
> 
>   references a PCIe driver:
>   [2]
> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
> 
> So there is a PCIe driver that works.
> 
> However, the RFC patch series was proposing adding fwnode support to
> the driver.  My first
> surface reading (just part of that one email, not the entire series or
> the replies yet),
> notes:
> 
>   ... However, when
>   plugged in a PCIe slot (on a x86), there is no device-tree support 
> and
>   the peripherals that are present must be described in some other way.
> 
> I am assuming that the peripherals are what you mentioned above as 
> '"platform"
> devices'.  This is where my current confusion lies.  Are the "platform"
> devices accessed via the PCI bus or is there some other electrical 
> connection
> between the host system and the PCIe card?

Hi Frank,

The platform devices exposed by this PCIe card are available via some 
BAR using PCI memory mapped areas, so it's totally standard PCI stuff.

> 
> If the "platform" devices are accessed via the PCI bus, then I would 
> expect them
> to be described by PCI configuration header registers.  Are the PCI
> configuration
> registers to describe the "platform" devices not present?

I'm not sure I understand what you mean here. PCI configuration headers
only provide some basic registers that identify the PCI device
(vendor/product) and some memory areas that are exposed (BARs). They do
not provide the "list" of peripherals that are exposed by the device,
only some BARs that can be mapped and that allow access to them.

In the case of the lan9662 network controller, BARs 0 and 1 expose
multiple devices that are located at subranges of these BARs. For
instance (not accurate), we have the I2C controller located at BAR 0
+ offset 0x1000, then the flexcom controller exposed in BAR 0 at offset
0x20000, etc. This list of peripherals is not exposed at all by the PCI
configuration headers (since that is not their purpose, of course). All
of these peripherals already have existing platform drivers, which can
then be reused thanks to the PCI device-tree overlay series.
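
To make that concrete, here is a minimal overlay sketch for such a card.
The offsets, window size, node label and compatible strings are made up
for illustration and do not match the real LAN9662 layout; it also
assumes the PCI core has already created a node for the endpoint,
referenced here through a hypothetical label, with the children mapped
into BAR 0 via 'ranges':

/dts-v1/;
/plugin/;

&card_ep {                            /* node generated for the endpoint */
        #address-cells = <1>;
        #size-cells = <1>;
        /* child offset 0x0 -> BAR 0; bus/devfn fields left at 0 here */
        ranges = <0x0  0x02000010 0x0 0x0  0x100000>;

        i2c@1000 {
                compatible = "vendor,example-i2c";      /* placeholder */
                reg = <0x1000 0x100>;
        };

        flexcom@20000 {
                compatible = "vendor,example-flexcom";  /* placeholder */
                reg = <0x20000 0x1000>;
        };
};

With something along these lines applied, the existing platform drivers
bind to the child nodes just as they would on the SoC version of the
part.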

> 
> I'll read through the fwnode RFC thread to add to see what happened to
> the proposal.

You can probably read the cover letter, which describes the use case in
detail. However, don't spend too much time reading the patchset; we
discarded it for many good reasons (far too many modifications in
subsystems, no standardization of software node bindings, etc.).

Clément
  
Frank Rowand March 6, 2023, 9:24 p.m. UTC | #12
On 3/6/23 02:35, clement.leger@bootlin.com wrote:
> Le 2023-03-04 00:42, Frank Rowand a écrit :
>> On 2/27/23 04:31, Clément Léger wrote:
>>> Le Mon, 27 Feb 2023 00:51:29 -0600,
>>> Frank Rowand <frowand.list@gmail.com> a écrit :
>>>
>>>> On 1/19/23 21:02, Lizhi Hou wrote:
>>>>> This patch series introduces OF overlay support for PCI devices 
>>>>> which
>>>>> primarily addresses two use cases. First, it provides a data driven 
>>>>> method
>>>>> to describe hardware peripherals that are present in a PCI endpoint 
>>>>> and
>>>>> hence can be accessed by the PCI host. Second, it allows reuse of a 
>>>>> OF
>>>>> compatible driver -- often used in SoC platforms -- in a PCI host 
>>>>> based
>>>>> system.
>>>>>
>>>>> There are 2 series devices rely on this patch:
>>>>>
>>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>>>   2) Microchip LAN9662 Ethernet Controller
>>>>>
>>>>>      Please see: 
>>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>>>>
>>>>
>>>>
>>>>> Normally, the PCI core discovers PCI devices and their BARs using 
>>>>> the
>>>>> PCI enumeration process. However, the process does not provide a way 
>>>>> to
>>>>> discover the hardware peripherals that are present in a PCI device, 
>>>>> and
>>>>> which can be accessed through the PCI BARs. Also, the enumeration 
>>>>> process
>>>>
>>>> I'm confused.  The PCI Configuration Header Registers should describe 
>>>> the
>>>> hardware on the PCI card.
>>>>
>>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>>>> themselves, so I would like to analyze that case separately), does 
>>>> the
>>>> second device, "Microchip LAN9662 Ethernet Controller" properly 
>>>> implement
>>>> the PCI Configuration Header Registers?  What additional information 
>>>> is
>>>> needed that is not provided in those registers?
>>>
>>> Hi Frank,
>>>
>>> I guess Lizhi wanted to say that it does not provide a way to describe
>>> all the "platform" devices that are exposed by this PCI device. Which
>>> is of course the whole point of the work we are doing right now. But
>>> all the BARs are correctly described by the LAN9662 PCI card.
>>>
>>> Clément
>>
>> I remain confused.
>>
>> [RFC 00/10] add support for fwnode in i2c mux system and sfp
>> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
>>
>>   references a PCIe driver:
>>   [2]
>> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
>>
>> So there is a PCIe driver that works.
>>
>> However, the RFC patch series was proposing adding fwnode support to
>> the driver.  My first
>> surface reading (just part of that one email, not the entire series or
>> the replies yet),
>> notes:
>>
>>   ... However, when
>>   plugged in a PCIe slot (on a x86), there is no device-tree support 
>> and
>>   the peripherals that are present must be described in some other way.
>>
>> I am assuming that the peripherals are what you mentioned above as 
>> '"platform"
>> devices'.  This is where my current confusion lies.  Are the "platform"
>> devices accessed via the PCI bus or is there some other electrical 
>> connection
>> between the host system and the PCIe card?
> 
> Hi Frank,
> 
> The platform devices exposed by this PCIe card are available via some 
> BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
> 
>>
>> If the "platform" devices are accessed via the PCI bus, then I would 
>> expect them
>> to be described by PCI configuration header registers.  Are the PCI
>> configuration
>> registers to describe the "platform" devices not present?
> 
> I'm not sure to understand what you mean here. PCI configuration headers
> only provides some basic registers allowing to identify the PCI device
> (vendor/product) and some memory areas that are exposed (BAR). They do
> not provides the "list" of peripherals that are exposed by the devices,
> only some BARs that can be mapped and that allows to access.

Yes, "identify the PCI device (vendor/product) and some memory areas".
The driver for the (vendor/product) 'knows' what peripherals are exposed
by the device and where within the BAR to find the registers for each
of the devices.

A normal PCI driver would contain this information.  If I understand the
proposal of this patch series, of_pci_make_dev_node() adds a node to
the devicetree, when invoked via a PCI quirk for certain specific
vendor/product cards.  This node must exist before the flattened device
tree (FDT) overlay for the card can be loaded.  The driver for the card
will get the overlay FDT from the card and load it into the kernel.
The driver will use the information that then exists in the devicetree
describing the card, instead of using information from the PCI configuration
headers from the card.
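
For reference, my reading of patches 2 and 3 is that the node the quirk
generates carries only the basic properties before the overlay is
applied, something along these lines (the compatible string and 'reg'
value are illustrative, not taken from the patches):

dev@0,0 {
        /* compatible derived from the vendor/device IDs;
         * exact string format is assumed here */
        compatible = "pci10ee,5020";
        /* standard 5-cell PCI 'reg': config-space address of the
         * function; bus/devfn encoding omitted in this sketch */
        reg = <0x0 0x0 0x0 0x0 0x0>;
};

The overlay FDT shipped with the card would then add 'ranges' and the
child nodes underneath this node.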

The intent is to be able to re-use devicetree based drivers instead of
having the driver be a native PCI driver.

This goes against historical Linux practice.  The idea of taking a driver
from another environment (e.g. Windows, HP Unix, Sun Unix, IBM Unix, etc.)
and adding a shim layer to translate between Linux and the other
environment has been rejected.  Ironically, in this case, the other
environment is Linux (more specifically the Linux OF implementation).
Even though the other environment is Linux, this is still adding a
shim layer to translate between that other environment and the native
Linux PCI environment for which the driver would normally be written.

In other words, this is not acceptable.  Normal alternatives would be
something like (1) add the PCI awareness to the existing drivers,
(2) split the devicetree-aware and PCI-aware portions of the driver
into common code that would be invoked from separate devicetree and PCI
drivers, (3) write entirely separate devicetree and PCI drivers, or
(4) some other creative solution.

Am I misinterpreting or misunderstanding anything crucial here?

-Frank

> 
> In the case of the lan9662 cnetwork controller, BAR 0 and 1 exposes
> multiples devices that are located at some subranges of this BAR. For
> instance (not accurate), we have the I2C controller located at BAR 0
> + offset 0X1000, then the flexcom controller exposed in BAR 0 at offset
>   0x20000, etc. This list of peripheral is not exposed at all by the PCI
> configuration headers (since it is not the purpose of course). All of
> these peripherals have already existing platform drivers which can then
> be reused thanks to the PCI device-tree overlay series.
> 
>>
>> I'll read through the fwnode RFC thread to add to see what happened to
>> the proposal.
> 
> You can probably read the cover letter which described the use case in 
> details. However, don't spend too much time reading the patchset, we
> discarded them for many good reason (way too much modifications in
> subsystems, no standardization of software node bindings, etc).
> 
> Clément
>
  
Rob Herring March 7, 2023, 12:52 a.m. UTC | #13
On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/6/23 02:35, clement.leger@bootlin.com wrote:
> > Le 2023-03-04 00:42, Frank Rowand a écrit :
> >> On 2/27/23 04:31, Clément Léger wrote:
> >>> Le Mon, 27 Feb 2023 00:51:29 -0600,
> >>> Frank Rowand <frowand.list@gmail.com> a écrit :
> >>>
> >>>> On 1/19/23 21:02, Lizhi Hou wrote:
> >>>>> This patch series introduces OF overlay support for PCI devices
> >>>>> which
> >>>>> primarily addresses two use cases. First, it provides a data driven
> >>>>> method
> >>>>> to describe hardware peripherals that are present in a PCI endpoint
> >>>>> and
> >>>>> hence can be accessed by the PCI host. Second, it allows reuse of a
> >>>>> OF
> >>>>> compatible driver -- often used in SoC platforms -- in a PCI host
> >>>>> based
> >>>>> system.
> >>>>>
> >>>>> There are 2 series devices rely on this patch:
> >>>>>
> >>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
> >>>>>   2) Microchip LAN9662 Ethernet Controller
> >>>>>
> >>>>>      Please see:
> >>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> >>>>>
> >>>>
> >>>>
> >>>>> Normally, the PCI core discovers PCI devices and their BARs using
> >>>>> the
> >>>>> PCI enumeration process. However, the process does not provide a way
> >>>>> to
> >>>>> discover the hardware peripherals that are present in a PCI device,
> >>>>> and
> >>>>> which can be accessed through the PCI BARs. Also, the enumeration
> >>>>> process
> >>>>
> >>>> I'm confused.  The PCI Configuration Header Registers should describe
> >>>> the
> >>>> hardware on the PCI card.
> >>>>
> >>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
> >>>> themselves, so I would like to analyze that case separately), does
> >>>> the
> >>>> second device, "Microchip LAN9662 Ethernet Controller" properly
> >>>> implement
> >>>> the PCI Configuration Header Registers?  What additional information
> >>>> is
> >>>> needed that is not provided in those registers?
> >>>
> >>> Hi Frank,
> >>>
> >>> I guess Lizhi wanted to say that it does not provide a way to describe
> >>> all the "platform" devices that are exposed by this PCI device. Which
> >>> is of course the whole point of the work we are doing right now. But
> >>> all the BARs are correctly described by the LAN9662 PCI card.
> >>>
> >>> Clément
> >>
> >> I remain confused.
> >>
> >> [RFC 00/10] add support for fwnode in i2c mux system and sfp
> >> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
> >>
> >>   references a PCIe driver:
> >>   [2]
> >> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
> >>
> >> So there is a PCIe driver that works.
> >>
> >> However, the RFC patch series was proposing adding fwnode support to
> >> the driver.  My first
> >> surface reading (just part of that one email, not the entire series or
> >> the replies yet),
> >> notes:
> >>
> >>   ... However, when
> >>   plugged in a PCIe slot (on a x86), there is no device-tree support
> >> and
> >>   the peripherals that are present must be described in some other way.
> >>
> >> I am assuming that the peripherals are what you mentioned above as
> >> '"platform"
> >> devices'.  This is where my current confusion lies.  Are the "platform"
> >> devices accessed via the PCI bus or is there some other electrical
> >> connection
> >> between the host system and the PCIe card?
> >
> > Hi Frank,
> >
> > The platform devices exposed by this PCIe card are available via some
> > BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
> >
> >>
> >> If the "platform" devices are accessed via the PCI bus, then I would
> >> expect them
> >> to be described by PCI configuration header registers.  Are the PCI
> >> configuration
> >> registers to describe the "platform" devices not present?
> >
> > I'm not sure to understand what you mean here. PCI configuration headers
> > only provides some basic registers allowing to identify the PCI device
> > (vendor/product) and some memory areas that are exposed (BAR). They do
> > not provides the "list" of peripherals that are exposed by the devices,
> > only some BARs that can be mapped and that allows to access.
>
> Yes, "identify the PCI device (vendor/product) and some memory areas".
> The driver for the (vendor/product) 'knows' what peripherals are exposed
> by the device and where within the BAR to find the registers for each
> of the devices.
>
> A normal PCI driver would contain this information.  If I understand the
> proposal of this patch series, of_pci_make_dev_node() adds a node to
> the devicetree, when invoked via a PCI quirk for certain specific
> vendor/product cards.  This node must exist for the flattened device
> tree (FDT) overlay for the card to be loaded.  The driver for the card
> will get the overlay FDT from the card and load it into the kernel.
> The driver will use the information that then exists in the devicetree
> describing the card, instead of using information from the PCI configuration
> headers from the card.

How would all the sub-devices be defined by the PCI config space, other
than a VID/PID that implies *everything*? That's the same as the pre-DT
world where the ARM machine ID number (from RMK's registry) implied
everything. These days, we can have an entire SoC exposed behind a PCI
BAR, which I think is pretty much Clement's use case. Putting an SoC
behind a PCI BAR is no more discoverable than a "normal" SoC.

>
> The intent is to be able to re-use devicetree based drivers instead of
> having the driver be a native PCI driver.

Not instead of. There's the PCI driver for the FPGA or SoC bus with
multiple unrelated devices behind it. The PCI driver is just a bus
driver much like we have for various custom SoC bus drivers.

> This goes against historical Linux practice.  The idea of taking a driver
> from another environment (eg Windows, HP Unix, Sun Unix, IBM Unix, etc)
> and adding a shim layer to translate between Linux and the other
> environment has been rejected.  Ironically, in this case, the other
> environment is Linux (more specifically the Linux OF implementation).

I don't see how your example relates to this in any way whatsoever.
We're talking about different discovery mechanisms, not different
driver models/environments.

> Even thought the other environment is Linux, this is still adding a
> shim layer to translate between that other environment and the native
> Linux PCI environment for which the driver would normally be written.
>
> In other words, this is not acceptable.  Normal alternatives would be
> something like
> (1) add the PCI awareness to the existing drivers,

The downstream devices don't have their own PCI config space. That
won't work. PCI drivers expect a device with PCI config space. Devices
to drivers are always 1:1, so we couldn't share the config space among
multiple drivers or something. For devices which are not discoverable,
like these, our choices are DT, ACPI or s/w nodes (aka
platform_data 2.0).

> (2) split the devicetree aware and PCI aware portions of the driver
> to common code that would be invoked from separate devicetree and PCI
> drivers,

That only makes sense for something that is a single driver. Not the
case here. For the FPGA, the devices are not known up front.

> (3) write entirely separate devicetree and PCI drivers, or

For the same reason as 1, that simply won't work.

> (4) some other creative solution.
>
> Am I mis-interpretting or misunderstanding anything crucial here?

Yes...

We now have 3 different use cases all needing the same thing. The
3rd[1] is the recent test infrastructure change to have test devices
added. They all have non-discoverable devices downstream of a PCI
device. We need a solution here.

Rob

[1] https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/
  
Stefan Roese March 7, 2023, 7:54 a.m. UTC | #14
On 3/7/23 01:52, Rob Herring wrote:
> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 3/6/23 02:35, clement.leger@bootlin.com wrote:
>>> Le 2023-03-04 00:42, Frank Rowand a écrit :
>>>> On 2/27/23 04:31, Clément Léger wrote:
>>>>> Le Mon, 27 Feb 2023 00:51:29 -0600,
>>>>> Frank Rowand <frowand.list@gmail.com> a écrit :
>>>>>
>>>>>> On 1/19/23 21:02, Lizhi Hou wrote:
>>>>>>> This patch series introduces OF overlay support for PCI devices
>>>>>>> which
>>>>>>> primarily addresses two use cases. First, it provides a data driven
>>>>>>> method
>>>>>>> to describe hardware peripherals that are present in a PCI endpoint
>>>>>>> and
>>>>>>> hence can be accessed by the PCI host. Second, it allows reuse of a
>>>>>>> OF
>>>>>>> compatible driver -- often used in SoC platforms -- in a PCI host
>>>>>>> based
>>>>>>> system.
>>>>>>>
>>>>>>> There are 2 series devices rely on this patch:
>>>>>>>
>>>>>>>    1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>>>>>    2) Microchip LAN9662 Ethernet Controller
>>>>>>>
>>>>>>>       Please see:
>>>>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>>>>>>
>>>>>>
>>>>>>
>>>>>>> Normally, the PCI core discovers PCI devices and their BARs using
>>>>>>> the
>>>>>>> PCI enumeration process. However, the process does not provide a way
>>>>>>> to
>>>>>>> discover the hardware peripherals that are present in a PCI device,
>>>>>>> and
>>>>>>> which can be accessed through the PCI BARs. Also, the enumeration
>>>>>>> process
>>>>>>
>>>>>> I'm confused.  The PCI Configuration Header Registers should describe
>>>>>> the
>>>>>> hardware on the PCI card.
>>>>>>
>>>>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>>>>>> themselves, so I would like to analyze that case separately), does
>>>>>> the
>>>>>> second device, "Microchip LAN9662 Ethernet Controller" properly
>>>>>> implement
>>>>>> the PCI Configuration Header Registers?  What additional information
>>>>>> is
>>>>>> needed that is not provided in those registers?
>>>>>
>>>>> Hi Frank,
>>>>>
>>>>> I guess Lizhi wanted to say that it does not provide a way to describe
>>>>> all the "platform" devices that are exposed by this PCI device. Which
>>>>> is of course the whole point of the work we are doing right now. But
>>>>> all the BARs are correctly described by the LAN9662 PCI card.
>>>>>
>>>>> Clément
>>>>
>>>> I remain confused.
>>>>
>>>> [RFC 00/10] add support for fwnode in i2c mux system and sfp
>>>> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
>>>>
>>>>    references a PCIe driver:
>>>>    [2]
>>>> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
>>>>
>>>> So there is a PCIe driver that works.
>>>>
>>>> However, the RFC patch series was proposing adding fwnode support to
>>>> the driver.  My first
>>>> surface reading (just part of that one email, not the entire series or
>>>> the replies yet),
>>>> notes:
>>>>
>>>>    ... However, when
>>>>    plugged in a PCIe slot (on a x86), there is no device-tree support
>>>> and
>>>>    the peripherals that are present must be described in some other way.
>>>>
>>>> I am assuming that the peripherals are what you mentioned above as
>>>> '"platform"
>>>> devices'.  This is where my current confusion lies.  Are the "platform"
>>>> devices accessed via the PCI bus or is there some other electrical
>>>> connection
>>>> between the host system and the PCIe card?
>>>
>>> Hi Frank,
>>>
>>> The platform devices exposed by this PCIe card are available via some
>>> BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
>>>
>>>>
>>>> If the "platform" devices are accessed via the PCI bus, then I would
>>>> expect them
>>>> to be described by PCI configuration header registers.  Are the PCI
>>>> configuration
>>>> registers to describe the "platform" devices not present?
>>>
>>> I'm not sure to understand what you mean here. PCI configuration headers
>>> only provides some basic registers allowing to identify the PCI device
>>> (vendor/product) and some memory areas that are exposed (BAR). They do
>>> not provides the "list" of peripherals that are exposed by the devices,
>>> only some BARs that can be mapped and that allows to access.
>>
>> Yes, "identify the PCI device (vendor/product) and some memory areas".
>> The driver for the (vendor/product) 'knows' what peripherals are exposed
>> by the device and where within the BAR to find the registers for each
>> of the devices.
>>
>> A normal PCI driver would contain this information.  If I understand the
>> proposal of this patch series, of_pci_make_dev_node() adds a node to
>> the devicetree, when invoked via a PCI quirk for certain specific
>> vendor/product cards.  This node must exist for the flattened device
>> tree (FDT) overlay for the card to be loaded.  The driver for the card
>> will get the overlay FDT from the card and load it into the kernel.
>> The driver will use the information that then exists in the devicetree
>> describing the card, instead of using information from the PCI configuration
>> headers from the card.
> 
> How would all the sub devices be defined by the PCI config space other
> than a VID/PID implies *everything*. That's the same as the pre-DT
> world where the ARM machine ID number (from RMK's registry) implied
> everything. These days, we can have an entire SoC exposed behind a PCI
> BAR which I think is pretty much Clement's usecase. Putting an SoC
> behind a PCI BAR is no more discoverable than a "normal" SoC.
> 
>>
>> The intent is to be able to re-use devicetree based drivers instead of
>> having the driver be a native PCI driver.
> 
> Not instead of. There's the PCI driver for the FPGA or SoC bus with
> multiple unrelated devices behind it. The PCI driver is just a bus
> driver much like we have for various custom SoC bus drivers.
> 
>> This goes against historical Linux practice.  The idea of taking a driver
>> from another environment (eg Windows, HP Unix, Sun Unix, IBM Unix, etc)
>> and adding a shim layer to translate between Linux and the other
>> environment has been rejected.  Ironically, in this case, the other
>> environment is Linux (more specifically the Linux OF implementation).
> 
> I don't see how your example relates to this in any way whatsoever.
> We're talking about different discovery mechanisms, not different
> driver models/environments.
> 
>> Even thought the other environment is Linux, this is still adding a
>> shim layer to translate between that other environment and the native
>> Linux PCI environment for which the driver would normally be written.
>>
>> In other words, this is not acceptable.  Normal alternatives would be
>> something like
>> (1) add the PCI awareness to the existing drivers,
> 
> The downstream devices don't have their own PCI config space. That
> won't work. PCI drivers expect a device with PCI config space. Devices
> to drivers are always 1:1, so we couldn't share the config space among
> multiple drivers or something. For devices which are not discoverable
> like these are, our choices are DT, ACPI or s/w nodes (aka
> platform_data 2.0).
> 
>> (2) split the devicetree aware and PCI aware portions of the driver
>> to common code that would be invoked from separate devicetree and PCI
>> drivers,
> 
> That only makes sense for something that is a single driver. Not the
> case here. For the FPGA, the devices are not known up front.
> 
>> (3) write entirely separate devicetree and PCI drivers, or
> 
> For the same reason as 1, that simply won't work.
> 
>> (4) some other creative solution.
>>
>> Am I mis-interpretting or misunderstanding anything crucial here?
> 
> Yes...
> 
> We now have 3 different use cases all needing the same thing. The
> 3rd[1] is the recent test infrastructure change to have test devices
> added. They all have non-discoverable devices downstream of a PCI
> device. We need a solution here.

Thanks Rob for this explanation. I would like to second that, as I've
been looking for a "solution" for exactly this situation for a few years
now, in multiple system configurations. The setup is mostly identical:
an FPGA connected via PCIe to the host CPU, with misc devices embedded
in the FPGA, like an NS16550 UART, a GPIO controller, etc., which often
have existing drivers in the kernel. Using this dynamic device tree node
approach for the PCIe EP, with the possibility to describe the FPGA-
internal devices via device tree, seems to be the most elegant solution
IMHO.
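
As a rough illustration of that setup (register offsets, clock rate and
window size are made up, and interrupt wiring to the card's MSI-X
vectors is left out), such an FPGA card's overlay could follow the same
pattern as the sketch earlier in the thread and be as small as:

&fpga_ep {                            /* node generated for the endpoint */
        #address-cells = <1>;
        #size-cells = <1>;
        ranges = <0x0  0x02000010 0x0 0x0  0x100000>;   /* BAR 0 window */

        serial@4000 {
                compatible = "ns16550a";     /* stock 8250 driver binds */
                reg = <0x4000 0x1000>;
                reg-shift = <2>;
                clock-frequency = <50000000>;
        };
};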

Thanks,
Stefan

> Rob
> 
> [1] https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/
  
Clément Léger March 7, 2023, 8:47 a.m. UTC | #15
On Mon, 6 Mar 2023 18:52:42 -0600,
Rob Herring <robh@kernel.org> wrote:

> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > On 3/6/23 02:35, clement.leger@bootlin.com wrote:  
> > > Le 2023-03-04 00:42, Frank Rowand a écrit :  
> > >> On 2/27/23 04:31, Clément Léger wrote:  
> > >>> Le Mon, 27 Feb 2023 00:51:29 -0600,
> > >>> Frank Rowand <frowand.list@gmail.com> a écrit :
> > >>>  
> > >>>> On 1/19/23 21:02, Lizhi Hou wrote:  
> > >>>>> This patch series introduces OF overlay support for PCI devices
> > >>>>> which
> > >>>>> primarily addresses two use cases. First, it provides a data driven
> > >>>>> method
> > >>>>> to describe hardware peripherals that are present in a PCI endpoint
> > >>>>> and
> > >>>>> hence can be accessed by the PCI host. Second, it allows reuse of a
> > >>>>> OF
> > >>>>> compatible driver -- often used in SoC platforms -- in a PCI host
> > >>>>> based
> > >>>>> system.
> > >>>>>
> > >>>>> There are 2 series devices rely on this patch:
> > >>>>>
> > >>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
> > >>>>>   2) Microchip LAN9662 Ethernet Controller
> > >>>>>
> > >>>>>      Please see:
> > >>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> > >>>>>  
> > >>>>
> > >>>>  
> > >>>>> Normally, the PCI core discovers PCI devices and their BARs using
> > >>>>> the
> > >>>>> PCI enumeration process. However, the process does not provide a way
> > >>>>> to
> > >>>>> discover the hardware peripherals that are present in a PCI device,
> > >>>>> and
> > >>>>> which can be accessed through the PCI BARs. Also, the enumeration
> > >>>>> process  
> > >>>>
> > >>>> I'm confused.  The PCI Configuration Header Registers should describe
> > >>>> the
> > >>>> hardware on the PCI card.
> > >>>>
> > >>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
> > >>>> themselves, so I would like to analyze that case separately), does
> > >>>> the
> > >>>> second device, "Microchip LAN9662 Ethernet Controller" properly
> > >>>> implement
> > >>>> the PCI Configuration Header Registers?  What additional information
> > >>>> is
> > >>>> needed that is not provided in those registers?  
> > >>>
> > >>> Hi Frank,
> > >>>
> > >>> I guess Lizhi wanted to say that it does not provide a way to describe
> > >>> all the "platform" devices that are exposed by this PCI device. Which
> > >>> is of course the whole point of the work we are doing right now. But
> > >>> all the BARs are correctly described by the LAN9662 PCI card.
> > >>>
> > >>> Clément  
> > >>
> > >> I remain confused.
> > >>
> > >> [RFC 00/10] add support for fwnode in i2c mux system and sfp
> > >> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
> > >>
> > >>   references a PCIe driver:
> > >>   [2]
> > >> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
> > >>
> > >> So there is a PCIe driver that works.
> > >>
> > >> However, the RFC patch series was proposing adding fwnode support to
> > >> the driver.  My first
> > >> surface reading (just part of that one email, not the entire series or
> > >> the replies yet),
> > >> notes:
> > >>
> > >>   ... However, when
> > >>   plugged in a PCIe slot (on a x86), there is no device-tree support
> > >> and
> > >>   the peripherals that are present must be described in some other way.
> > >>
> > >> I am assuming that the peripherals are what you mentioned above as
> > >> '"platform"
> > >> devices'.  This is where my current confusion lies.  Are the "platform"
> > >> devices accessed via the PCI bus or is there some other electrical
> > >> connection
> > >> between the host system and the PCIe card?  
> > >
> > > Hi Frank,
> > >
> > > The platform devices exposed by this PCIe card are available via some
> > > BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
> > >  
> > >>
> > >> If the "platform" devices are accessed via the PCI bus, then I would
> > >> expect them
> > >> to be described by PCI configuration header registers.  Are the PCI
> > >> configuration
> > >> registers to describe the "platform" devices not present?  
> > >
> > > I'm not sure to understand what you mean here. PCI configuration headers
> > > only provides some basic registers allowing to identify the PCI device
> > > (vendor/product) and some memory areas that are exposed (BAR). They do
> > > not provides the "list" of peripherals that are exposed by the devices,
> > > only some BARs that can be mapped and that allows to access.  
> >
> > Yes, "identify the PCI device (vendor/product) and some memory areas".
> > The driver for the (vendor/product) 'knows' what peripherals are exposed
> > by the device and where within the BAR to find the registers for each
> > of the devices.
> >
> > A normal PCI driver would contain this information.  If I understand the
> > proposal of this patch series, of_pci_make_dev_node() adds a node to
> > the devicetree, when invoked via a PCI quirk for certain specific
> > vendor/product cards.  This node must exist for the flattened device
> > tree (FDT) overlay for the card to be loaded.  The driver for the card
> > will get the overlay FDT from the card and load it into the kernel.
> > The driver will use the information that then exists in the devicetree
> > describing the card, instead of using information from the PCI configuration
> > headers from the card.  
> 
> How would all the sub devices be defined by the PCI config space other
> than a VID/PID implies *everything*. That's the same as the pre-DT
> world where the ARM machine ID number (from RMK's registry) implied
> everything. These days, we can have an entire SoC exposed behind a PCI
> BAR which I think is pretty much Clement's usecase. Putting an SoC
> behind a PCI BAR is no more discoverable than a "normal" SoC.

Thanks Rob for re-explaining all of that; I thought the cover letter
at [1] explained that. This is *exactly* my use case: the lan9662 SoC can
either be used to run Linux with a device-tree description, or can be
configured as a PCIe endpoint card and plugged into a PCI port, in
which case all the SoC IO memories are exposed through BARs 0 & 1.

> 
> >
> > The intent is to be able to re-use devicetree based drivers instead of
> > having the driver be a native PCI driver.  
> 
> Not instead of. There's the PCI driver for the FPGA or SoC bus with
> multiple unrelated devices behind it. The PCI driver is just a bus
> driver much like we have for various custom SoC bus drivers.
> 
> > This goes against historical Linux practice.  The idea of taking a driver
> > from another environment (eg Windows, HP Unix, Sun Unix, IBM Unix, etc)
> > and adding a shim layer to translate between Linux and the other
> > environment has been rejected.  Ironically, in this case, the other
> > environment is Linux (more specifically the Linux OF implementation).
> 
> I don't see how your example relates to this in any way whatsoever.
> We're talking about different discovery mechanisms, not different
> driver models/environments.
> 
> > Even thought the other environment is Linux, this is still adding a
> > shim layer to translate between that other environment and the native
> > Linux PCI environment for which the driver would normally be written.

Since there is an entire SoC described behind that PCI card, we need to
link all the devices together. So it's not as simple as saying "I want
a driver for each device to be probed"; we also need to describe the
whole hierarchy & links between the devices. PCI "itself" does not
describe how to define that, only a way to access the memory and
identify the PCI device.

> >
> > In other words, this is not acceptable.  Normal alternatives would be
> > something like
> > (1) add the PCI awareness to the existing drivers,  
> 
> The downstream devices don't have their own PCI config space. That
> won't work. PCI drivers expect a device with PCI config space. Devices
> to drivers are always 1:1, so we couldn't share the config space among
> multiple drivers or something. For devices which are not discoverable
> like these are, our choices are DT, ACPI or s/w nodes (aka
> platform_data 2.0).

Exactly, and even though it would be possible to share the config
space, it would mean that each driver would need to be modified to
support PCI, and all the OF layer that allows linking the devices
together and configuring the device would need to be modified in some
way to allow passing arguments, which would be going back to
platform_data stuff.

> 
> > (2) split the devicetree aware and PCI aware portions of the driver
> > to common code that would be invoked from separate devicetree and PCI
> > drivers,  
> 
> That only makes sense for something that is a single driver. Not the
> case here. For the FPGA, the devices are not known up front.
> 
> > (3) write entirely separate devicetree and PCI drivers, or  
> 
> For the same reason as 1, that simply won't work.
> 
> > (4) some other creative solution.
> >
> > Am I mis-interpretting or misunderstanding anything crucial here?  
> 
> Yes...
> 
> We now have 3 different use cases all needing the same thing. The
> 3rd[1] is the recent test infrastructure change to have test devices
> added. They all have non-discoverable devices downstream of a PCI
> device. We need a solution here.

We have been going back and forth for more than a year now and we
tested several things (software nodes, ACPI, MFD cells,
device-tree overlays). The most elegant solution (at least from the
ones we thought of) has proven to be the device-tree overlays with
dynamic PCI OF nodes, for various reasons:

- Minimal amount of code modified
- Reusability of the existing SoC device-tree
- Already available bindings (no need to define a new standard for
  system description)
- Works on all platforms (x86, ARM, ARM64, etc)
- Ease of use for the end-user

No other solution provided as many pros (see [1] for the history).
Of course there are some cons, such as the *not so clear* status of OF
overlay stability when loaded/unloaded, but we are clearly working
toward better support.

I even think that a common driver could handle most of these use
cases: it would load an overlay based on the PCI VID/PID and apply some
ranges remapping depending on it, reducing the amount of specific,
complex drivers needed for these use cases.
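
To illustrate the idea, here is a very rough, hypothetical sketch of
such a common glue driver. The VID/PID, the .dtbo name and the way the
overlay would be targeted at the dynamically generated PCI node are all
assumptions made for illustration; the real code may look quite
different:

// Hypothetical sketch of a generic "overlay loader" PCI driver.
// The VID/PID, firmware name and overlay-targeting details are assumptions.
#include <linux/firmware.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pci.h>

static int dt_overlay_pci_probe(struct pci_dev *pdev,
				const struct pci_device_id *id)
{
	const struct firmware *fw;
	int ovcs_id, ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* Overlay blob shipped alongside the driver (name is made up). */
	ret = request_firmware(&fw, "lan966x_pci.dtbo", &pdev->dev);
	if (ret)
		return ret;

	/*
	 * Apply the overlay.  With this series the overlay has to resolve
	 * against the node generated for this PCI endpoint, and a "ranges"
	 * property has to be added so the child "reg" values map into the
	 * BARs; the exact interface for that is what is being discussed.
	 */
	ret = of_overlay_fdt_apply(fw->data, fw->size, &ovcs_id);
	release_firmware(fw);
	if (ret)
		return ret;

	pci_set_drvdata(pdev, (void *)(long)ovcs_id);
	return 0;
}

static void dt_overlay_pci_remove(struct pci_dev *pdev)
{
	int ovcs_id = (long)pci_get_drvdata(pdev);

	of_overlay_remove(&ovcs_id);
}

static const struct pci_device_id dt_overlay_pci_ids[] = {
	{ PCI_DEVICE(0x1055, 0x9660) },	/* lan9662 in PCIe endpoint mode */
	{ }
};
MODULE_DEVICE_TABLE(pci, dt_overlay_pci_ids);

static struct pci_driver dt_overlay_pci_driver = {
	.name     = "dt-overlay-pci",
	.id_table = dt_overlay_pci_ids,
	.probe    = dt_overlay_pci_probe,
	.remove   = dt_overlay_pci_remove,
};
module_pci_driver(dt_overlay_pci_driver);
MODULE_LICENSE("GPL");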

Clément

[1]
https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
  
Frank Rowand March 8, 2023, 7:29 a.m. UTC | #16
On 3/7/23 02:47, Clément Léger wrote:
> Le Mon, 6 Mar 2023 18:52:42 -0600,
> Rob Herring <robh@kernel.org> a écrit :
> 
>> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 3/6/23 02:35, clement.leger@bootlin.com wrote:  
>>>> Le 2023-03-04 00:42, Frank Rowand a écrit :  
>>>>> On 2/27/23 04:31, Clément Léger wrote:  
>>>>>> Le Mon, 27 Feb 2023 00:51:29 -0600,
>>>>>> Frank Rowand <frowand.list@gmail.com> a écrit :
>>>>>>  
>>>>>>> On 1/19/23 21:02, Lizhi Hou wrote:  
>>>>>>>> This patch series introduces OF overlay support for PCI devices
>>>>>>>> which
>>>>>>>> primarily addresses two use cases. First, it provides a data driven
>>>>>>>> method
>>>>>>>> to describe hardware peripherals that are present in a PCI endpoint
>>>>>>>> and
>>>>>>>> hence can be accessed by the PCI host. Second, it allows reuse of a
>>>>>>>> OF
>>>>>>>> compatible driver -- often used in SoC platforms -- in a PCI host
>>>>>>>> based
>>>>>>>> system.
>>>>>>>>
>>>>>>>> There are 2 series devices rely on this patch:
>>>>>>>>
>>>>>>>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>>>>>>   2) Microchip LAN9662 Ethernet Controller
>>>>>>>>
>>>>>>>>      Please see:

First off, if I had gone back and finished reading through this link:

>>>>>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/

I would have found out that the Microchip LAN9662 Ethernet Controller is
_not_ just an ethernet controller (as I was assuming), but instead also
contains ARM64 cores.  (Actually I am sure that I had noted that an SoC
was involved, but lost track of that info when trying to wrap my head
around this patch series.)

Even this information does not explain where the ethernet controller is
located and how it is accessed.  My most likely guess, without looking at
the chip specification, is that the ethernet controller interfaces to the
processor bus and/or an IOMMU is probably involved.  What is very unclear
to me is the interconnection between the card's PCI connector (which will
be plugged into the host system where we are running Linux for this patch
series) and the SoC-internal devices.  This will feed into my
misunderstanding below.

For future versions of this patch series, please include the info that the
LAN9662 Ethernet Controller is an SoC, as was noted in the above link.

>>>>>>>>  
>>>>>>>
>>>>>>>  
>>>>>>>> Normally, the PCI core discovers PCI devices and their BARs using
>>>>>>>> the
>>>>>>>> PCI enumeration process. However, the process does not provide a way
>>>>>>>> to
>>>>>>>> discover the hardware peripherals that are present in a PCI device,
>>>>>>>> and
>>>>>>>> which can be accessed through the PCI BARs. Also, the enumeration
>>>>>>>> process  
>>>>>>>
>>>>>>> I'm confused.  The PCI Configuration Header Registers should describe
>>>>>>> the
>>>>>>> hardware on the PCI card.
>>>>>>>
>>>>>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>>>>>>> themselves, so I would like to analyze that case separately), does
>>>>>>> the
>>>>>>> second device, "Microchip LAN9662 Ethernet Controller" properly
>>>>>>> implement
>>>>>>> the PCI Configuration Header Registers?  What additional information
>>>>>>> is
>>>>>>> needed that is not provided in those registers?  
>>>>>>
>>>>>> Hi Frank,
>>>>>>
>>>>>> I guess Lizhi wanted to say that it does not provide a way to describe
>>>>>> all the "platform" devices that are exposed by this PCI device. Which
>>>>>> is of course the whole point of the work we are doing right now. But
>>>>>> all the BARs are correctly described by the LAN9662 PCI card.
>>>>>>
>>>>>> Clément  
>>>>>
>>>>> I remain confused.
>>>>>
>>>>> [RFC 00/10] add support for fwnode in i2c mux system and sfp
>>>>> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
>>>>>
>>>>>   references a PCIe driver:
>>>>>   [2]
>>>>> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
>>>>>
>>>>> So there is a PCIe driver that works.
>>>>>
>>>>> However, the RFC patch series was proposing adding fwnode support to
>>>>> the driver.  My first
>>>>> surface reading (just part of that one email, not the entire series or
>>>>> the replies yet),
>>>>> notes:
>>>>>
>>>>>   ... However, when
>>>>>   plugged in a PCIe slot (on a x86), there is no device-tree support
>>>>> and
>>>>>   the peripherals that are present must be described in some other way.
>>>>>
>>>>> I am assuming that the peripherals are what you mentioned above as
>>>>> '"platform"
>>>>> devices'.  This is where my current confusion lies.  Are the "platform"
>>>>> devices accessed via the PCI bus or is there some other electrical
>>>>> connection
>>>>> between the host system and the PCIe card?  
>>>>

From Clément:

>>>> The platform devices exposed by this PCIe card are available via some
>>>> BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
>>>>  
>>>>>

I had been assuming that each device that is visible via the PCI connector
had a set of PCI configuration header registers that
provided the information required to use that device.

The above paragraph reinforced my understanding, saying "so it's totally
standard PCI stuff".  (Apparently my understanding was wrong, see further
below...)

I had earlier asked this:

>>>>> If the "platform" devices are accessed via the PCI bus, then I would
>>>>> expect them
>>>>> to be described by PCI configuration header registers.  Are the PCI
>>>>> configuration
>>>>> registers to describe the "platform" devices not present?  

Clément replied:

>>>>
>>>> I'm not sure to understand what you mean here. PCI configuration headers
>>>> only provides some basic registers allowing to identify the PCI device
>>>> (vendor/product) and some memory areas that are exposed (BAR). They do
>>>> not provides the "list" of peripherals that are exposed by the devices,
>>>> only some BARs that can be mapped and that allows to access.  
>>>

To which I replied:

>>> Yes, "identify the PCI device (vendor/product) and some memory areas".
>>> The driver for the (vendor/product) 'knows' what peripherals are exposed
>>> by the device and where within the BAR to find the registers for each
>>> of the devices.
>>>
>>> A normal PCI driver would contain this information.  If I understand the
>>> proposal of this patch series, of_pci_make_dev_node() adds a node to
>>> the devicetree, when invoked via a PCI quirk for certain specific
>>> vendor/product cards.  This node must exist for the flattened device
>>> tree (FDT) overlay for the card to be loaded.  The driver for the card
>>> will get the overlay FDT from the card and load it into the kernel.
>>> The driver will use the information that then exists in the devicetree
>>> describing the card, instead of using information from the PCI configuration
>>> headers from the card.  
>>

Clément had further replied (this seems to have gotten lost in Rob's reply to me):

   In the case of the lan9662 network controller, BARs 0 and 1 expose
   multiple devices that are located at some subranges of these BARs. For
   instance (not accurate), we have the I2C controller located at BAR 0
   + offset 0x1000, then the flexcom controller exposed in BAR 0 at offset
   0x20000, etc. This list of peripheral is not exposed at all by the PCI
   configuration headers (since it is not the purpose of course). All of
   these peripherals have already existing platform drivers which can then
   be reused thanks to the PCI device-tree overlay series.

At this point my understanding (or misunderstanding as we will see below)
remains intact, with the slight addition that the devices on the PCI card
are multi-function.  I've never worked with multi-function devices on a
PCI card so I don't know how they are supposed to be described, but this
new information seems reasonable.

The (vendor/product) would provide the information to find a driver
that understands that this PCI device contains both the I2C controller
and the flexcom controller.

The phrase "This list of peripheral is not exposed at all by the PCI
configuration headers" is confusing to me.  A major function of the
configuration headers is to list exactly what devices are available on
the card, as identified by (vendor/product) in each configuration
header.  The (vendor/product) "exposes" the multi-function device,
also known as "the list of peripheral".


Rob's "re-explaining":

>> How would all the sub devices be defined by the PCI config space other
>> than a VID/PID implies *everything*. That's the same as the pre-DT
>> world where the ARM machine ID number (from RMK's registry) implied
>> everything. These days, we can have an entire SoC exposed behind a PCI
>> BAR which I think is pretty much Clement's usecase. Putting an SoC
>> behind a PCI BAR is no more discoverable than a "normal" SoC.

That is a PCI device problem, not a problem of non-discoverable devices
on the system's non-discoverable busses.  PCI devices are supposed to be
discoverable on the PCI bus.

This series claimed to be about (1) a method to "describe hardware devices
that are present in a PCI endpoint" and (2) "allows reuse of a OF
compatible driver".

(1) is normally implemented directly in a PCI driver

(2) is using a shim to make a single driver able to access a device that
is either directly on the system bus (the driver's "native" environment)
or located on a PCI card (in other words on the other side of the PCI
bus interface on the PCI card).

"Putting an SoC behind a PCI BAR" is not different to me than any
other device on a PCI card.  And I do not _even_ know what it means
to put an SoC behind a PCI BAR (there are many ways that could be
implemented), which is why I was asking Clément questions, trying
to understand the problem space.

PCI devices are more than a BAR; there is a set of PCI Configuration
Header Registers to provide the requisite information to locate the
proper driver for the device and to provide the dynamic information
specific to the device in this card (e.g. address ranges, interrupt
info, MSI, MSI-X, etc).  The driver is expected to contain all
further knowledge about how to control the device.
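
(As a purely illustrative aside, a trivial sketch of the kind of
information a native PCI driver is handed from those header registers
via the PCI core; nothing here is specific to the hardware being
discussed:)

#include <linux/pci.h>

/* Purely illustrative: identity, BAR range and interrupt from the header. */
static void pci_show_header_info(struct pci_dev *pdev)
{
	resource_size_t bar0 = pci_resource_start(pdev, 0);
	resource_size_t len0 = pci_resource_len(pdev, 0);

	dev_info(&pdev->dev,
		 "vendor %04x device %04x, BAR0 %pa (%pa bytes), irq %u\n",
		 pdev->vendor, pdev->device, &bar0, &len0, pdev->irq);
}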


Clément replies:

> Thanks Rob for re-explaining all of that, I thought the cover letter
> at [1] explained that This is *exactly* my usecase. the lan9662 SoC can
> either be used to run Linux and uses a device-tree description, or can

This is a clue for me:

> be configured as a PCIe endpoint card and plugged on, a PCI port, in
> which case all the SoC IO memories are exposed through BAR 0 & 1.

I had been assuming that each SoC supplied device that is visible via
the PCI connector had a set of PCI Configuration Header Registers that
provided the information required to use that device.

I think this is where my misunderstanding jumps out.

Please correct and explain as necessary anything in the block that
I write here:

The clue above appears to say that one portion of "SoC IO memories" is
exposed through BAR 0 and a second portion of "SoC IO memories" is
exposed through BAR 1.

This would be consistent with Rob's description saying the entire
SoC is exposed by a single Configuration Header Register set.

Caveat: I've never worked with an IOMMU on ARM64 (only on other very
different architectures) so I have no clue as to what the specific
ARM64 IOMMU architecture is or any related nuances.

I don't know what "SoC IO memories" are.  But given the context of a
driver accessing addresses, I am guessing these are addresses in the
context of the processor bus.  The precise description probably does
not matter if I can simply assume they are addresses the driver will
use, and the addresses are properly mapped into the addresses listed
in the BARs.

Frank's assumption: the relevant devices, such as the ethernet controller,
are located directly on the SoC processor bus (is the proper term "memory
bus"? - addresses that are accessed via SoC load and store instructions).

So are the entire set of SoC processor bus addresses mapped via the BARs?
Or even a significant subset?

Further assumptions from Frank: Some sort of firmware is running on the SoC
that is on the PCI card that does things like control clocks, do power
management, set up the interrupts from devices to be visible via the PCI
connector, as described by the PCI Configuration Header Registers (possibly
including the MSI and/or MSI-X registers), any IOMMU setup if needed, etc.
The host system that the PCI card is plugged into is insulated from that
set of activities by the PCI interface.  <----- This assumption may turn
out to be wrong, as I ask more questions further below.


> 
>>
>>>
>>> The intent is to be able to re-use devicetree based drivers instead of
>>> having the driver be a native PCI driver.  
>>
>> Not instead of. There's the PCI driver for the FPGA or SoC bus with
>> multiple unrelated devices behind it. The PCI driver is just a bus
>> driver much like we have for various custom SoC bus drivers.
>>
>>> This goes against historical Linux practice.  The idea of taking a driver
>>> from another environment (eg Windows, HP Unix, Sun Unix, IBM Unix, etc)
>>> and adding a shim layer to translate between Linux and the other
>>> environment has been rejected.  Ironically, in this case, the other
>>> environment is Linux (more specifically the Linux OF implementation).
>>
>> I don't see how your example relates to this in any way whatsoever.
>> We're talking about different discovery mechanisms, not different
>> driver models/environments.
>>
>>> Even thought the other environment is Linux, this is still adding a
>>> shim layer to translate between that other environment and the native
>>> Linux PCI environment for which the driver would normally be written.
> 

Clément replies:

> Since there is an entire SoC described behind that PCI card, we need to
> link all the devcies together. So it's not as simple as saying "I want
> a driver for each device to be probed", we also need to describe the
> whole hierarchy & links between the devices. PCI "itself" does not
> describe how to define that, only a way to access the memory and
> identify the PCI device.

This makes me guess that the assumption I made above in this email
is not correct.  This seems to be saying that the host system needs
to understand related devices in the card's SoC, and possibly control
them.  An example of this could be that the PCI Configuration Header
Registers do _not_ describe the interrupt information for e.g. the
ethernet controller, and maybe the host system needs to understand
and/or program an interrupt controller in the card's SoC.  Is this
conceptually correct (that is maybe not this specific hardware, but
a similar concept)?


> 
>>>
>>> In other words, this is not acceptable.  Normal alternatives would be
>>> something like
>>> (1) add the PCI awareness to the existing drivers,  
>>

Rob said:

>> The downstream devices don't have their own PCI config space. That
>> won't work. PCI drivers expect a device with PCI config space. Devices
>> to drivers are always 1:1, so we couldn't share the config space among

A PCI device can be a multifunction device.  Whether the host OS supports
this via a single driver or multiple drivers is OS dependent.

>> multiple drivers or something. For devices which are not discoverable
>> like these are, our choices are DT, ACPI or s/w nodes (aka
>> platform_data 2.0).

The devices _are_ discoverable.  The PCI Configuration Header Register
(vendor/product) says what the devices are.

> 

Clément replies:
> Exactly, and even though it would be possible to share the the config
> space, it would mean that each driver would need to be modified to
> support PCI and all the OF layer that allows to link the devcuie
> together and configure the device would need to be modified in some way
> to allows passing arguments, that would be going back to paltform_data
> stuff.

To me this implies that (as an example, using the ethernet controller)
the host system will be configuring the resources and/or infrastructure
on the SoC that is used by the ethernet controller.

Do I understand that correctly?

My previous expectation was that the ethernet controller driver on the
host system would only be touching registers mapped to the ethernet
controller to do things like send commands, read status results,
specify buffer addresses that would map back to flow through the PCI
connector, etc.

> 
>>
>>> (2) split the devicetree aware and PCI aware portions of the driver
>>> to common code that would be invoked from separate devicetree and PCI
>>> drivers,  
>>
>> That only makes sense for something that is a single driver. Not the
>> case here. For the FPGA, the devices are not known up front.
>>
>>> (3) write entirely separate devicetree and PCI drivers, or  
>>
>> For the same reason as 1, that simply won't work.
>>
>>> (4) some other creative solution.
>>>
>>> Am I mis-interpretting or misunderstanding anything crucial here?  
>>
>> Yes...
>>
>> We now have 3 different use cases all needing the same thing. The
>> 3rd[1] is the recent test infrastructure change to have test devices
>> added. They all have non-discoverable devices downstream of a PCI
>> device. We need a solution here.
> 
> We have been going back and forth for about more than a year now and we
> tested several things (software nodes, ACPI, MFD cells,
> device-tree overlays). The most elegant solution (at least from the
> ones we though of) has proven to be the device-tree overlays with
> dynamic PCI of nodes for various reasons:

I can't really respond to this section and below because I still do
not understand the specific problem space of the SoC on the PCI card.

Hopefully your response to this email will move me further along
the path to properly understand the PCI card implementation.

-Frank

> 
> - Minimal amount of code modified
> - Reusability of the existing SoC device-tree
> - Already available bindings (no need to define a new standard for
>   system description)
> - Works on all platforms (x86, ARM, ARM64, etc)
> - Ease of use for the end-user
> 
> No other solution was providing so much pros (see [1] for the history).
> Of course there are some cons such as the *not so clear* status about OF
> overlays statiblity when loaded/unloaded but we are clearly working
> toward a better support.
> 
> I even think that a common driver to handle much of these use cases
> could exists and allow to load an overlay based on the PCI VID/PID and
> apply some ranges remapping depending on it, allowing to reduce the
> amount of specific complex drivers for handling these usecases.
> 
> Clément
> 
> [1]
> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
  
Frank Rowand March 8, 2023, 7:31 a.m. UTC | #17
On 3/6/23 18:52, Rob Herring wrote:
> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>

< snip >

Hi Rob,

I am in no position to comment intelligently on your comments until I
understand the SoC-on-PCI-card model that I am asking to have described
in this subthread.

I'll try to remember to get back to this email once my understanding
is more complete.

-Frank
  
Frank Rowand March 8, 2023, 7:38 a.m. UTC | #18
On 3/7/23 01:54, Stefan Roese wrote:
> On 3/7/23 01:52, Rob Herring wrote:
>> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 3/6/23 02:35, clement.leger@bootlin.com wrote:
>>>> Le 2023-03-04 00:42, Frank Rowand a écrit :
>>>>> On 2/27/23 04:31, Clément Léger wrote:
>>>>>> Le Mon, 27 Feb 2023 00:51:29 -0600,
>>>>>> Frank Rowand <frowand.list@gmail.com> a écrit :
>>>>>>
>>>>>>> On 1/19/23 21:02, Lizhi Hou wrote:
>>>>>>>> This patch series introduces OF overlay support for PCI devices
>>>>>>>> which
>>>>>>>> primarily addresses two use cases. First, it provides a data driven
>>>>>>>> method
>>>>>>>> to describe hardware peripherals that are present in a PCI endpoint
>>>>>>>> and
>>>>>>>> hence can be accessed by the PCI host. Second, it allows reuse of a
>>>>>>>> OF
>>>>>>>> compatible driver -- often used in SoC platforms -- in a PCI host
>>>>>>>> based
>>>>>>>> system.
>>>>>>>>
>>>>>>>> There are 2 series devices rely on this patch:
>>>>>>>>
>>>>>>>>    1) Xilinx Alveo Accelerator cards (FPGA based device)
>>>>>>>>    2) Microchip LAN9662 Ethernet Controller
>>>>>>>>
>>>>>>>>       Please see:
>>>>>>>> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Normally, the PCI core discovers PCI devices and their BARs using
>>>>>>>> the
>>>>>>>> PCI enumeration process. However, the process does not provide a way
>>>>>>>> to
>>>>>>>> discover the hardware peripherals that are present in a PCI device,
>>>>>>>> and
>>>>>>>> which can be accessed through the PCI BARs. Also, the enumeration
>>>>>>>> process
>>>>>>>
>>>>>>> I'm confused.  The PCI Configuration Header Registers should describe
>>>>>>> the
>>>>>>> hardware on the PCI card.
>>>>>>>
>>>>>>> Ignoring case 1 above _for the moment_ (FPGA devices are a world unto
>>>>>>> themselves, so I would like to analyze that case separately), does
>>>>>>> the
>>>>>>> second device, "Microchip LAN9662 Ethernet Controller" properly
>>>>>>> implement
>>>>>>> the PCI Configuration Header Registers?  What additional information
>>>>>>> is
>>>>>>> needed that is not provided in those registers?
>>>>>>
>>>>>> Hi Frank,
>>>>>>
>>>>>> I guess Lizhi wanted to say that it does not provide a way to describe
>>>>>> all the "platform" devices that are exposed by this PCI device. Which
>>>>>> is of course the whole point of the work we are doing right now. But
>>>>>> all the BARs are correctly described by the LAN9662 PCI card.
>>>>>>
>>>>>> Clément
>>>>>
>>>>> I remain confused.
>>>>>
>>>>> [RFC 00/10] add support for fwnode in i2c mux system and sfp
>>>>> https://lore.kernel.org/lkml/YhQHqDJvahgriDZK@lunn.ch/t/
>>>>>
>>>>>    references a PCIe driver:
>>>>>    [2]
>>>>> https://github.com/clementleger/linux/blob/fwnode_support/drivers/mfd/lan966x_pci_mfd.c
>>>>>
>>>>> So there is a PCIe driver that works.
>>>>>
>>>>> However, the RFC patch series was proposing adding fwnode support to
>>>>> the driver.  My first
>>>>> surface reading (just part of that one email, not the entire series or
>>>>> the replies yet),
>>>>> notes:
>>>>>
>>>>>    ... However, when
>>>>>    plugged in a PCIe slot (on a x86), there is no device-tree support
>>>>> and
>>>>>    the peripherals that are present must be described in some other way.
>>>>>
>>>>> I am assuming that the peripherals are what you mentioned above as
>>>>> '"platform"
>>>>> devices'.  This is where my current confusion lies.  Are the "platform"
>>>>> devices accessed via the PCI bus or is there some other electrical
>>>>> connection
>>>>> between the host system and the PCIe card?
>>>>
>>>> Hi Frank,
>>>>
>>>> The platform devices exposed by this PCIe card are available via some
>>>> BAR using PCI memory mapped areas, so it's totally standard PCI stuff.
>>>>
>>>>>
>>>>> If the "platform" devices are accessed via the PCI bus, then I would
>>>>> expect them
>>>>> to be described by PCI configuration header registers.  Are the PCI
>>>>> configuration
>>>>> registers to describe the "platform" devices not present?
>>>>
>>>> I'm not sure to understand what you mean here. PCI configuration headers
>>>> only provides some basic registers allowing to identify the PCI device
>>>> (vendor/product) and some memory areas that are exposed (BAR). They do
>>>> not provides the "list" of peripherals that are exposed by the devices,
>>>> only some BARs that can be mapped and that allows to access.
>>>
>>> Yes, "identify the PCI device (vendor/product) and some memory areas".
>>> The driver for the (vendor/product) 'knows' what peripherals are exposed
>>> by the device and where within the BAR to find the registers for each
>>> of the devices.
>>>
>>> A normal PCI driver would contain this information.  If I understand the
>>> proposal of this patch series, of_pci_make_dev_node() adds a node to
>>> the devicetree, when invoked via a PCI quirk for certain specific
>>> vendor/product cards.  This node must exist for the flattened device
>>> tree (FDT) overlay for the card to be loaded.  The driver for the card
>>> will get the overlay FDT from the card and load it into the kernel.
>>> The driver will use the information that then exists in the devicetree
>>> describing the card, instead of using information from the PCI configuration
>>> headers from the card.
>>
>> How would all the sub devices be defined by the PCI config space other
>> than a VID/PID implies *everything*. That's the same as the pre-DT
>> world where the ARM machine ID number (from RMK's registry) implied
>> everything. These days, we can have an entire SoC exposed behind a PCI
>> BAR which I think is pretty much Clement's usecase. Putting an SoC
>> behind a PCI BAR is no more discoverable than a "normal" SoC.
>>
>>>
>>> The intent is to be able to re-use devicetree based drivers instead of
>>> having the driver be a native PCI driver.
>>
>> Not instead of. There's the PCI driver for the FPGA or SoC bus with
>> multiple unrelated devices behind it. The PCI driver is just a bus
>> driver much like we have for various custom SoC bus drivers.
>>
>>> This goes against historical Linux practice.  The idea of taking a driver
>>> from another environment (eg Windows, HP Unix, Sun Unix, IBM Unix, etc)
>>> and adding a shim layer to translate between Linux and the other
>>> environment has been rejected.  Ironically, in this case, the other
>>> environment is Linux (more specifically the Linux OF implementation).
>>
>> I don't see how your example relates to this in any way whatsoever.
>> We're talking about different discovery mechanisms, not different
>> driver models/environments.
>>
>>> Even thought the other environment is Linux, this is still adding a
>>> shim layer to translate between that other environment and the native
>>> Linux PCI environment for which the driver would normally be written.
>>>
>>> In other words, this is not acceptable.  Normal alternatives would be
>>> something like
>>> (1) add the PCI awareness to the existing drivers,
>>
>> The downstream devices don't have their own PCI config space. That
>> won't work. PCI drivers expect a device with PCI config space. Devices
>> to drivers are always 1:1, so we couldn't share the config space among
>> multiple drivers or something. For devices which are not discoverable
>> like these are, our choices are DT, ACPI or s/w nodes (aka
>> platform_data 2.0).
>>
>>> (2) split the devicetree aware and PCI aware portions of the driver
>>> to common code that would be invoked from separate devicetree and PCI
>>> drivers,
>>
>> That only makes sense for something that is a single driver. Not the
>> case here. For the FPGA, the devices are not known up front.
>>
>>> (3) write entirely separate devicetree and PCI drivers, or
>>
>> For the same reason as 1, that simply won't work.
>>
>>> (4) some other creative solution.
>>>
>>> Am I mis-interpretting or misunderstanding anything crucial here?
>>
>> Yes...
>>
>> We now have 3 different use cases all needing the same thing. The
>> 3rd[1] is the recent test infrastructure change to have test devices
>> added. They all have non-discoverable devices downstream of a PCI
>> device. We need a solution here.
> 
> Thanks Rob for this explanation. I would like to second that, as I've
> been looking for a "solution" for exact this situation for a few years
> now in multiple system configurations. The setup mostly being identical:

Please don't hijack this subthread.

I am really struggling to get a correct understanding of the issues
specifically related to the example of the Microchip LAN9662 Ethernet
Controller on a PCI card.  I want this subthread to remain focused
on that, as much as possible.

The below info will be useful in the bigger picture discussion, and
the main thread discussion, but I want to keep this subthread focused
on the one specific example item of hardware.

Thanks (and thanks for all your useful participation over many years),

Frank

> An FPGA connected via PCIe to the host CPU, embedded in the FPGA misc
> devices like NS16550 UART, GPIO controller, etc which often have
> existing drivers in the Kernel. Using this dynamic device tree node
> approach for the PCIe EP with the possibility to describe the FPGA-
> internal devices via device tree seems to be the most elegant solution
> IMHO.
> 
> Thanks,
> Stefan
> 
>> Rob
>>
>> [1] https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/
  
Frank Rowand March 9, 2023, 5:52 a.m. UTC | #19
On 1/19/23 21:02, Lizhi Hou wrote:
> This patch series introduces OF overlay support for PCI devices which
> primarily addresses two use cases. First, it provides a data driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of a OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.
> 
> There are 2 series devices rely on this patch:
> 
>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>   2) Microchip LAN9662 Ethernet Controller

Can someone please provide me a link to both:

  - a high level specification sheet
  - a detailed data sheet/programming manual

-Frank


> 
>      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> 
> Normally, the PCI core discovers PCI devices and their BARs using the
> PCI enumeration process. However, the process does not provide a way to
> discover the hardware peripherals that are present in a PCI device, and
> which can be accessed through the PCI BARs. Also, the enumeration process
> does not provide a way to associate MSI-X vectors of a PCI device with the
> hardware peripherals that are present in the device. PCI device drivers
> often use header files to describe the hardware peripherals and their
> resources as there is no standard data driven way to do so. This patch
> series proposes to use flattened device tree blob to describe the
> peripherals in a data driven way. Based on previous discussion, using
> device tree overlay is the best way to unflatten the blob and populate
> platform devices. To use device tree overlay, there are three obvious
> problems that need to be resolved.
> 
> First, we need to create a base tree for non-DT system such as x86_64. A
> patch series has been submitted for this:
> https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
> https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/
> 
> Second, a device tree node corresponding to the PCI endpoint is required
> for overlaying the flattened device tree blob for that PCI endpoint.
> Because PCI is a self-discoverable bus, a device tree node is usually not
> created for PCI devices. This series adds support to generate a device
> tree node for a PCI device which advertises itself using PCI quirks
> infrastructure.
> 
> Third, we need to generate device tree nodes for PCI bridges since a child
> PCI endpoint may choose to have a device tree node created.
> 
> This patch series is made up of three patches.
> 
> The first patch is adding OF interface to create or destroy OF node
> dynamically.
> 
> The second patch introduces a kernel option, CONFIG_DYNAMIC_PCI_OF_NODEX.
> When the option is turned on, the kernel will generate device tree nodes
> for all PCI bridges unconditionally. The patch also shows how to use the
> PCI quirks infrastructure, DECLARE_PCI_FIXUP_FINAL to generate a device
> tree node for a device. Specifically, the patch generates a device tree
> node for Xilinx Alveo U50 PCIe accelerator device. The generated device
> tree nodes do not have any property.
> 
> The third patch adds basic properties ('reg', 'compatible' and
> 'device_type') to the dynamically generated device tree nodes. More
> properties can be added in the future.
> 
> Here is the example of device tree nodes generated within the ARM64 QEMU.
> # lspci -t    
> -[0000:00]-+-00.0
>            +-01.0-[01]--
>            +-01.1-[02]----00.0
>            +-01.2-[03]----00.0
>            +-01.3-[04]----00.0
>            +-01.4-[05]----00.0
>            +-01.5-[06]--
>            +-01.6-[07]--
>            +-01.7-[08]--
>            +-02.0-[09-0b]----00.0-[0a-0b]----00.0-[0b]--+-00.0
>            |                                            \-00.1
>            +-02.1-[0c]--
>            \-03.0-[0d-0e]----00.0-[0e]----01.0
> 
> # tree /sys/firmware/devicetree/base/pcie\@10000000
> /sys/firmware/devicetree/base/pcie@10000000
> |-- #address-cells
> |-- #interrupt-cells
> |-- #size-cells
> |-- bus-range
> |-- compatible
> |-- device_type
> |-- dma-coherent
> |-- interrupt-map
> |-- interrupt-map-mask
> |-- linux,pci-domain
> |-- msi-parent
> |-- name
> |-- pci@1,0
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,1
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,2
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,3
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,4
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,5
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,6
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@1,7
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@2,0
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- pci@0,0
> |   |   |-- #address-cells
> |   |   |-- #size-cells
> |   |   |-- compatible
> |   |   |-- device_type
> |   |   |-- pci@0,0
> |   |   |   |-- #address-cells
> |   |   |   |-- #size-cells
> |   |   |   |-- compatible
> |   |   |   |-- dev@0,0
> |   |   |   |   |-- compatible
> |   |   |   |   `-- reg
> |   |   |   |-- dev@0,1
> |   |   |   |   |-- compatible
> |   |   |   |   `-- reg
> |   |   |   |-- device_type
> |   |   |   |-- ranges
> |   |   |   `-- reg
> |   |   |-- ranges
> |   |   `-- reg
> |   |-- ranges
> |   `-- reg
> |-- pci@2,1
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- ranges
> |   `-- reg
> |-- pci@3,0
> |   |-- #address-cells
> |   |-- #size-cells
> |   |-- compatible
> |   |-- device_type
> |   |-- pci@0,0
> |   |   |-- #address-cells
> |   |   |-- #size-cells
> |   |   |-- compatible
> |   |   |-- device_type
> |   |   |-- ranges
> |   |   `-- reg
> |   |-- ranges
> |   `-- reg
> |-- ranges
> `-- reg
> 
> Changes since v6:
> - Removed single line wrapper functions
> - Added Signed-off-by Clément Léger <clement.leger@bootlin.com>
> 
> Changes since v5:
> - Fixed code review comments
> - Fixed incorrect 'ranges' and 'reg' properties and verified address
>   translation.
> 
> Changes since RFC v4:
> - Fixed code review comments
> 
> Changes since RFC v3:
> - Split the Xilinx Alveo U50 PCI quirk to a separate patch
> - Minor changes in commit description and code comment
> 
> Changes since RFC v2:
> - Merged patch 3 with patch 2
> - Added OF interfaces of_changeset_add_prop_* and use them to create
>   properties.
> - Added '#address-cells', '#size-cells' and 'ranges' properties.
> 
> Changes since RFC v1:
> - Added one patch to create basic properties.
> - To move DT related code out of PCI subsystem, replaced of_node_alloc()
>   with of_create_node()/of_destroy_node()
> 
> Lizhi Hou (3):
>   of: dynamic: Add interfaces for creating device node dynamically
>   PCI: Create device tree node for selected devices
>   PCI: Add PCI quirks to generate device tree node for Xilinx Alveo U50
> 
>  drivers/of/dynamic.c        | 197 +++++++++++++++++++++++++++++++++
>  drivers/pci/Kconfig         |  12 ++
>  drivers/pci/Makefile        |   1 +
>  drivers/pci/bus.c           |   2 +
>  drivers/pci/msi/irqdomain.c |   6 +-
>  drivers/pci/of.c            |  71 ++++++++++++
>  drivers/pci/of_property.c   | 212 ++++++++++++++++++++++++++++++++++++
>  drivers/pci/pci-driver.c    |   3 +-
>  drivers/pci/pci.h           |  19 ++++
>  drivers/pci/quirks.c        |  11 ++
>  drivers/pci/remove.c        |   1 +
>  include/linux/of.h          |  24 ++++
>  12 files changed, 556 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/pci/of_property.c
>
  
Frank Rowand March 9, 2023, 5:56 a.m. UTC | #20
On 3/8/23 23:52, Frank Rowand wrote:
> On 1/19/23 21:02, Lizhi Hou wrote:
>> This patch series introduces OF overlay support for PCI devices which
>> primarily addresses two use cases. First, it provides a data driven method
>> to describe hardware peripherals that are present in a PCI endpoint and
>> hence can be accessed by the PCI host. Second, it allows reuse of a OF
>> compatible driver -- often used in SoC platforms -- in a PCI host based
>> system.
>>
>> There are 2 series devices rely on this patch:
>>
>>   1) Xilinx Alveo Accelerator cards (FPGA based device)
>>   2) Microchip LAN9662 Ethernet Controller
> 

Sorry, hit send too quickly...

> Can someone please provide me a link to both:
> 
>   - a high level specification sheet
>   - a detailed data sheet/programming manual

for the lan9662 PCIe card

-Frank

> 
> -Frank
> 
> 
>>
>>      Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>>
>> Normally, the PCI core discovers PCI devices and their BARs using the
>> PCI enumeration process. However, the process does not provide a way to
>> discover the hardware peripherals that are present in a PCI device, and
>> which can be accessed through the PCI BARs. Also, the enumeration process
>> does not provide a way to associate MSI-X vectors of a PCI device with the
>> hardware peripherals that are present in the device. PCI device drivers
>> often use header files to describe the hardware peripherals and their
>> resources as there is no standard data driven way to do so. This patch
>> series proposes to use flattened device tree blob to describe the
>> peripherals in a data driven way. Based on previous discussion, using
>> device tree overlay is the best way to unflatten the blob and populate
>> platform devices. To use device tree overlay, there are three obvious
>> problems that need to be resolved.
>>
>> First, we need to create a base tree for non-DT system such as x86_64. A
>> patch series has been submitted for this:
>> https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
>> https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/
>>
>> Second, a device tree node corresponding to the PCI endpoint is required
>> for overlaying the flattened device tree blob for that PCI endpoint.
>> Because PCI is a self-discoverable bus, a device tree node is usually not
>> created for PCI devices. This series adds support to generate a device
>> tree node for a PCI device which advertises itself using PCI quirks
>> infrastructure.
>>
>> Third, we need to generate device tree nodes for PCI bridges since a child
>> PCI endpoint may choose to have a device tree node created.
>>
>> This patch series is made up of three patches.
>>
>> The first patch is adding OF interface to create or destroy OF node
>> dynamically.
>>
>> The second patch introduces a kernel option, CONFIG_DYNAMIC_PCI_OF_NODEX.
>> When the option is turned on, the kernel will generate device tree nodes
>> for all PCI bridges unconditionally. The patch also shows how to use the
>> PCI quirks infrastructure, DECLARE_PCI_FIXUP_FINAL to generate a device
>> tree node for a device. Specifically, the patch generates a device tree
>> node for Xilinx Alveo U50 PCIe accelerator device. The generated device
>> tree nodes do not have any property.
>>
>> The third patch adds basic properties ('reg', 'compatible' and
>> 'device_type') to the dynamically generated device tree nodes. More
>> properties can be added in the future.
>>
>> Here is the example of device tree nodes generated within the ARM64 QEMU.
>> # lspci -t    
>> -[0000:00]-+-00.0
>>            +-01.0-[01]--
>>            +-01.1-[02]----00.0
>>            +-01.2-[03]----00.0
>>            +-01.3-[04]----00.0
>>            +-01.4-[05]----00.0
>>            +-01.5-[06]--
>>            +-01.6-[07]--
>>            +-01.7-[08]--
>>            +-02.0-[09-0b]----00.0-[0a-0b]----00.0-[0b]--+-00.0
>>            |                                            \-00.1
>>            +-02.1-[0c]--
>>            \-03.0-[0d-0e]----00.0-[0e]----01.0
>>
>> # tree /sys/firmware/devicetree/base/pcie\@10000000
>> /sys/firmware/devicetree/base/pcie@10000000
>> |-- #address-cells
>> |-- #interrupt-cells
>> |-- #size-cells
>> |-- bus-range
>> |-- compatible
>> |-- device_type
>> |-- dma-coherent
>> |-- interrupt-map
>> |-- interrupt-map-mask
>> |-- linux,pci-domain
>> |-- msi-parent
>> |-- name
>> |-- pci@1,0
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,1
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,2
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,3
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,4
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,5
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,6
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@1,7
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@2,0
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- pci@0,0
>> |   |   |-- #address-cells
>> |   |   |-- #size-cells
>> |   |   |-- compatible
>> |   |   |-- device_type
>> |   |   |-- pci@0,0
>> |   |   |   |-- #address-cells
>> |   |   |   |-- #size-cells
>> |   |   |   |-- compatible
>> |   |   |   |-- dev@0,0
>> |   |   |   |   |-- compatible
>> |   |   |   |   `-- reg
>> |   |   |   |-- dev@0,1
>> |   |   |   |   |-- compatible
>> |   |   |   |   `-- reg
>> |   |   |   |-- device_type
>> |   |   |   |-- ranges
>> |   |   |   `-- reg
>> |   |   |-- ranges
>> |   |   `-- reg
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@2,1
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- ranges
>> |   `-- reg
>> |-- pci@3,0
>> |   |-- #address-cells
>> |   |-- #size-cells
>> |   |-- compatible
>> |   |-- device_type
>> |   |-- pci@0,0
>> |   |   |-- #address-cells
>> |   |   |-- #size-cells
>> |   |   |-- compatible
>> |   |   |-- device_type
>> |   |   |-- ranges
>> |   |   `-- reg
>> |   |-- ranges
>> |   `-- reg
>> |-- ranges
>> `-- reg
>>
>> Changes since v6:
>> - Removed single line wrapper functions
>> - Added Signed-off-by Clément Léger <clement.leger@bootlin.com>
>>
>> Changes since v5:
>> - Fixed code review comments
>> - Fixed incorrect 'ranges' and 'reg' properties and verified address
>>   translation.
>>
>> Changes since RFC v4:
>> - Fixed code review comments
>>
>> Changes since RFC v3:
>> - Split the Xilinx Alveo U50 PCI quirk to a separate patch
>> - Minor changes in commit description and code comment
>>
>> Changes since RFC v2:
>> - Merged patch 3 with patch 2
>> - Added OF interfaces of_changeset_add_prop_* and use them to create
>>   properties.
>> - Added '#address-cells', '#size-cells' and 'ranges' properties.
>>
>> Changes since RFC v1:
>> - Added one patch to create basic properties.
>> - To move DT related code out of PCI subsystem, replaced of_node_alloc()
>>   with of_create_node()/of_destroy_node()
>>
>> Lizhi Hou (3):
>>   of: dynamic: Add interfaces for creating device node dynamically
>>   PCI: Create device tree node for selected devices
>>   PCI: Add PCI quirks to generate device tree node for Xilinx Alveo U50
>>
>>  drivers/of/dynamic.c        | 197 +++++++++++++++++++++++++++++++++
>>  drivers/pci/Kconfig         |  12 ++
>>  drivers/pci/Makefile        |   1 +
>>  drivers/pci/bus.c           |   2 +
>>  drivers/pci/msi/irqdomain.c |   6 +-
>>  drivers/pci/of.c            |  71 ++++++++++++
>>  drivers/pci/of_property.c   | 212 ++++++++++++++++++++++++++++++++++++
>>  drivers/pci/pci-driver.c    |   3 +-
>>  drivers/pci/pci.h           |  19 ++++
>>  drivers/pci/quirks.c        |  11 ++
>>  drivers/pci/remove.c        |   1 +
>>  include/linux/of.h          |  24 ++++
>>  12 files changed, 556 insertions(+), 3 deletions(-)
>>  create mode 100644 drivers/pci/of_property.c
>>
>
  
Clément Léger March 9, 2023, 8:45 a.m. UTC | #21
Le Wed, 8 Mar 2023 01:31:52 -0600,
Frank Rowand <frowand.list@gmail.com> a écrit :

> On 3/6/23 18:52, Rob Herring wrote:
> > On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:  
> >>  
> 
> < snip >
> 
> Hi Rob,
> 
> I am in no position to comment intelligently on your comments until I
> understand the SoC on PCI card model I am asking to be described in
> this subthread.

Hi Frank,

Rather than answering all of the assumptions that were made in the upper
thread (which probably involve a bit too much inference), I will
re-explain everything from scratch.

Our use case involves the lan966x SoCs. These SoCs mainly target
networking applications and offer multiple SFP and RGMII interfaces.
This SoC can be used in two exclusive modes (at least for the intended
usage):

SoC mode:
   The device runs Linux by itself, on ARM64 cores included in the
   SoC. This use case of the lan966x is currently almost upstreamed,
   using a traditional Device Tree representation of the lan966x HW
   blocks [1]. A number of drivers for the different IPs of the SoC have
   already been merged in upstream Linux (see
   arch/arm/boot/dts/lan966x.dtsi).

PCI mode:
  The lan966x SoC is configured as a PCIe endpoint (PCI card),
  connected to a separate platform that acts as the PCIe root complex.
  In this case, all the IO memories exposed by the devices embedded in
  this SoC are exposed through PCI BARs 0 & 1, and the ARM64 cores of
  the SoC are not used. Since this is a PCIe card, it can be plugged
  into any platform, of any architecture, that supports PCIe.

This work focuses only on the *PCI mode* usage. In this mode, we have the
following prerequisites:
- Should work on all architectures (x86, ARM64, etc)
- Should be self-contained in the driver
- Should be able to reuse all existing platform drivers

In PCI mode, the card runs a firmware (not that it matters at all, by
the way) which configures the card in PCI mode at boot time. In this
mode, it exposes a single PCI physical function associated with
vendor/product 0x1055/0x9660. This is not a multi-function PCI device!
This means that all the IO memories (peripheral memories, device
memories, registers, whatever you call them) are accessible using
standard readl()/writel() on the BARs that have been remapped. For
instance (not accurate), in BAR 0 we will have this kind of memory
map:

           BAR0
   0x0 ┌───────────┐
       │           │
       ├───────────┤
       │   Clock   │
       │ controller│
       ├───────────┤
       │           │
       ├───────────┤
       │   I2C     │
       │ controller│
       ├───────────┤
       │           │
       ├───────────┤
       │   MDIO    │
       │ Controller│
       ├───────────┤
       │           │
       ├───────────┤
       │  Switch   │
       │ Controller│
       ├───────────┤
       │           │
       │   ...     │
          

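To make the access model concrete, here is a minimal sketch of how a PCI
driver can reach one of these peripherals through the remapped BAR. The
offsets and register names are made up for illustration; this is not code
from the actual driver:

#include <linux/io.h>
#include <linux/pci.h>

/* Hypothetical offset of the clock controller block inside BAR 0 */
#define LAN966X_CLK_BASE	0x00010000
#define LAN966X_CLK_ID_REG	0x0

static int lan966x_pci_read_example(struct pci_dev *pdev)
{
	void __iomem *bar0;
	u32 id;

	/* Map BAR 0; every SoC peripheral sits at a fixed offset inside it */
	bar0 = pci_ioremap_bar(pdev, 0);
	if (!bar0)
		return -ENOMEM;

	/* Plain readl()/writel() work on the remapped BAR */
	id = readl(bar0 + LAN966X_CLK_BASE + LAN966X_CLK_ID_REG);
	pci_info(pdev, "clock controller ID: %#x\n", id);

	iounmap(bar0);
	return 0;
}
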
It also exposes either a single interrupt via the legacy (INTx)
interrupt line (which can then be demuxed by reading the SoC's internal
interrupt controller registers), or multiple interrupts using MSI.
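
A minimal sketch of how a driver could request those vectors (purely
illustrative; the vector count is arbitrary and the real driver may
handle this differently):

static int lan966x_pci_irq_example(struct pci_dev *pdev)
{
	int nvec;

	/* Prefer MSI, fall back to the single legacy (INTx) interrupt */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
	if (nvec < 0)
		return nvec;

	/* Linux IRQ number of the first vector, usable with request_irq() */
	return pci_irq_vector(pdev, 0);
}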

As stated before, all these peripherals are already supported in SoC
mode and thus there are already existing platform drivers for each of
them. For more information about the devices that are exposed, please
see link [1], which is the device-tree overlay used to describe the
lan9662 card.

In order to use the Ethernet switch, we must configure everything that
lies around this Ethernet controller; here are a few of the devices
involved:
- MDIO bus
- I2C controller for SFP modules access
- Clock controller
- Ethernet controller
- Syscon

Since platform drivers already exist for all these devices, we want to
reuse them. Multiple solutions were considered (fwnode, MFD, ACPI,
device-tree); some of them were eventually ruled out, and efforts were
made to tackle the problem (using fwnode [2] and device-tree [3]).

One way to do so is to use a device-tree overlay description that is
loaded dynamically on the PCI device OF node. This can be done using the
various device-tree series that have been proposed (including this
one). On systems that do not provide a device-tree of_root, create an
empty of_root node (see [4]). Then, during PCI enumeration, create
device-tree nodes matching the PCI tree that was enumerated (see [5]).
This is needed since the PCI card can be plugged into whatever slot the
user wants, and thus it cannot be statically described using a fixed
"target-path" property in the overlay.

Finally, to glue everything together, we add a PCI driver for the
VID/PID of the PCI card (see [6]). This driver is responsible for adding
the "ranges" property to the device-tree PCI node so that the child
nodes' "reg" properties are remapped to the PCI memory map. This is
needed because the PCI memory addresses differ between platforms,
enumeration order and so on. Finally, the driver loads the device-tree
overlay (see [1]) onto the PCI device-tree node. Eventually, a call to
of_platform_default_populate() will probe the nodes and bind the
platform drivers.
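
As a rough outline (this is not the actual lan966x_pci driver from [6];
the "ranges" fixup and the overlay application are only sketched as
comments), the glue driver boils down to something like:

#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/pci.h>

static int lan966x_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/*
	 * 1. Add a "ranges" property to pdev->dev.of_node translating the
	 *    child "reg" addresses to the BAR addresses assigned during
	 *    enumeration.
	 * 2. Apply the device-tree overlay [1] on top of pdev->dev.of_node
	 *    (e.g. through the of_overlay_fdt_apply() interface).
	 */

	/* Probe the overlay nodes so the existing platform drivers bind */
	return of_platform_default_populate(pdev->dev.of_node, NULL,
					    &pdev->dev);
}

static const struct pci_device_id lan966x_pci_ids[] = {
	{ PCI_DEVICE(0x1055, 0x9660) },
	{ }
};
MODULE_DEVICE_TABLE(pci, lan966x_pci_ids);

static struct pci_driver lan966x_pci_driver = {
	.name     = "lan966x_pci",
	.id_table = lan966x_pci_ids,
	.probe    = lan966x_pci_probe,
};
module_pci_driver(lan966x_pci_driver);

MODULE_LICENSE("GPL");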

I hope this will help you understand what is going on here. In the
meantime, I am also trying to obtain public documentation about the
lan966x SoC.

[1]
https://github.com/clementleger/linux/blob/bf9b4ef803d86c4ae59a4ca195a4152b0d5c3cea/drivers/mfd/lan966x_pci.dts
[2]
https://lore.kernel.org/netdev/YhPSkz8+BIcdb72R@smile.fi.intel.com/T/
[3]
https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
[4]
https://lore.kernel.org/lkml/20230223213418.891942-1-frowand.list@gmail.com/
[5]
https://lore.kernel.org/lkml/1674183732-5157-1-git-send-email-lizhi.hou@amd.com/
[6]
https://github.com/clementleger/linux/blob/bf9b4ef803d86c4ae59a4ca195a4152b0d5c3cea/drivers/mfd/lan966x_pci_of.c
  
Christian Gmeiner March 21, 2023, 8:44 a.m. UTC | #22
Hi all

On Thu, 9 Mar 2023 at 09:52, Clément Léger
<clement.leger@bootlin.com> wrote:
>
> < snip >

What is missing to move on with this patch set?
  
Frank Rowand March 27, 2023, 3:01 a.m. UTC | #23
On 3/21/23 03:44, Christian Gmeiner wrote:
> Hi all
> 
> Am Do., 9. März 2023 um 09:52 Uhr schrieb Clément Léger
> <clement.leger@bootlin.com>:
>> < snip >
> 
> What is missing to move on with this patch set?

I need to evaluate what Clément Léger wrote in the email you replied to.  I had overlooked this
reply to my questions.  From a quick scan, it looks like he _probably_ provided the context I
was looking for to understand the architecture of the proposal.  But it is late Sunday night,
so I won't get to this tonight.

-Frank

>
  
Frank Rowand March 29, 2023, 4:50 p.m. UTC | #24
On 3/9/23 02:45, Clément Léger wrote:
> Le Wed, 8 Mar 2023 01:31:52 -0600,
> Frank Rowand <frowand.list@gmail.com> a écrit :
> 
>> On 3/6/23 18:52, Rob Herring wrote:
>>> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:  
>>>>  
>>
>> < snip >
>>
>> Hi Rob,
>>
>> I am in no position to comment intelligently on your comments until I
>> understand the SoC on PCI card model I am asking to be described in
>> this subthread.
> 
> Hi Frank,
> 
> Rather than answering all of the assumptions that were made in the upper
> thread (that are probably doing a bit too much of inference), I will
> re-explain that from scratch.

Thanks!  The below answers a lot of my questions.
> 
> Our usecase involves the lan966x SoCs. These SoCs are mainly targeting
> networking application and offers multiple SFP and RGMII interfaces.
> This Soc can be used in two exclusive modes (at least for the intended
> usage):
> 
> SoC mode:
>    The device runs Linux by itself, on ARM64 cores included in the
>    SoC. This use-case of the lan966x is currently almost upstreamed,
>    using a traditional Device Tree representation of the lan996x HW
>    blocks [1] A number of drivers for the different IPs of the SoC have
>    already been merged in upstream Linux (see
>    arch/arm/boot/dts/lan966x.dtsi)
> 
> PCI mode:
>   The lan966x SoC is configured as a PCIe endpoint (PCI card),
>   connected to a separate platform that acts as the PCIe root complex.
>   In this case, all the IO memories that are exposed by the devices
>   embedded on this SoC are exposed through PCI BARs 0 & 1 and the ARM64
>   cores of the SoC are not used. Since this is a PCIe card, it can be
>   plugged on any platform, of any architecture supporting PCIe.
> 
> This work only focus on the *PCI mode* usage. In this mode, we have the
> following prerequisites:
> - Should work on all architectures (x86, ARM64, etc)
> - Should be self-contained in the driver

> - Should be able to reuse all existing platform drivers

I've said before (in different words) that using an existing platform
driver for hardware on a PCI card requires shims, which have been
strongly rejected by the Linux kernel.

> 
> In PCI mode, the card runs a firmware (not that it matters at all by
> the way) which configure the card in PCI mode at boot time. In this
> mode, it exposes a single PCI physical function associated with
> vendor/product 0x1055/0x9660. This is not a multi-function PCI device !
> This means that all the IO memories (peripheral memories, device
> memories, registers, whatever you call them) are accessible using
> standard readl()/writel() on the BARs that have been remapped. For
> instance (not accurate), in the BAR 0, we will have this kind of memory
> map:
> 
>            BAR0
>    0x0 ┌───────────┐
>        │           │
>        ├───────────┤
>        │   Clock   │
>        │ controller│
>        ├───────────┤
>        │           │
>        ├───────────┤
>        │   I2C     │
>        │ controller│
>        ├───────────┤
>        │           │
>        ├───────────┤
>        │   MDIO    │
>        │ Controller│
>        ├───────────┤
>        │           │
>        ├───────────┤
>        │  Switch   │
>        │ Controller│
>        ├───────────┤
>        │           │
>        │   ...     │
>           
> 
> It also exposes either a single interrupt via the legacy interrupt
> (which can then be demuxed by reading the SoC internal interrupt
> controller registers), or multiple interrupts using MSI interrupts.
> 
> As stated before, all these peripherals are already supported in SoC
> mode and thus, there are aleready existing platform drivers for each of
> them. For more information about the devices that are exposed please
> see link [1] which is the device-tree overlay used to describe the
> lan9662 card.
> 
> In order to use the ethernet switch, we must configure everything that
> lies around this ethernet controller, here are a few amongst all of
> them:
> - MDIO bus
> - I2C controller for SFP modules access
> - Clock controller
> - Ethernet controller
> - Syscon
> 
> Since all the platform drivers already exist for these devices, we
> want to reuse them. Multiple solutions were thought of (fwnode, mfd,
> ACPI, device-tree) and eventually ruled out for some of them and efforts
> were made to try to tackle that (using fwnode [2], device-tree [3])
> 

> One way to do so is to use a device-tree overlay description that is
> loaded dynamically on the PCI device OF node. This can be done using the
> various device-tree series series that have been proposed (included
> this one). On systems that do not provide a device-tree of_root, create
> an empty of_root node (see [4]). Then during PCI enumeration, create
> device-tree node matching the PCI tree that was enumerated (See [5]).
> This is needed since the PCI card can be plugged on whatever port the
> user wants and thus it can not be statically described using a fixed
> "target-path" property in the overlay.

I understand that this is a use case and a desire to implement a
solution for the use case.  But this is a very non-standard model.
The proposal exposes a bunch of hardware beyond the PCI interface
in a non-PCI manner.

No, just no.  Respect the PCI interface boundary and do not drag
devicetree into an effort to pierce and straddle that boundary
(by adding information about the card, beyond the PCI controller,
into the system devicetree).  Information about dynamically
discoverable hardware does not belong in the devicetree.

-Frank

> 
> Finally, to glue everything together, we add a PCI driver for the
> VID/PID of the PCI card (See [6]). This driver is responsible of adding
> the "ranges" property in the device-tree PCI node to remap the child
> nodes "reg" property to the PCI memory map. This is needed because the
> PCI memory addresses differ between platform, enumeration order and so
> on.Finally, the driver will load the device-tree overlay (See [1]) to
> the PCI device-tree node. Eventually, a call to
> of_platform_default_populate() will probe the nodes and platform
> drivers.
> 
> I hope this will help you understanding what is going on here. In the
> meantime, I'm also trying to obtain public documentation about the
> lan966x SoC.
> 
> [1]
> https://github.com/clementleger/linux/blob/bf9b4ef803d86c4ae59a4ca195a4152b0d5c3cea/drivers/mfd/lan966x_pci.dts
> [2]
> https://lore.kernel.org/netdev/YhPSkz8+BIcdb72R@smile.fi.intel.com/T/
> [3]
> https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
> [4]
> https://lore.kernel.org/lkml/20230223213418.891942-1-frowand.list@gmail.com/
> [5]
> https://lore.kernel.org/lkml/1674183732-5157-1-git-send-email-lizhi.hou@amd.com/
> [6]
> https://github.com/clementleger/linux/blob/bf9b4ef803d86c4ae59a4ca195a4152b0d5c3cea/drivers/mfd/lan966x_pci_of.c
>
  
Rob Herring March 30, 2023, 3:19 p.m. UTC | #25
On Wed, Mar 29, 2023 at 11:51 AM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/9/23 02:45, Clément Léger wrote:
> > Le Wed, 8 Mar 2023 01:31:52 -0600,
> > Frank Rowand <frowand.list@gmail.com> a écrit :
> >
> >> On 3/6/23 18:52, Rob Herring wrote:
> >>> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>
> >>
> >> < snip >
> >>
> >> Hi Rob,
> >>
> >> I am in no position to comment intelligently on your comments until I
> >> understand the SoC on PCI card model I am asking to be described in
> >> this subthread.
> >
> > Hi Frank,
> >
> > Rather than answering all of the assumptions that were made in the upper
> > thread (that are probably doing a bit too much of inference), I will
> > re-explain that from scratch.
>
> Thanks!  The below answers a lot of my questions.
> >
> > Our usecase involves the lan966x SoCs. These SoCs are mainly targeting
> > networking application and offers multiple SFP and RGMII interfaces.
> > This Soc can be used in two exclusive modes (at least for the intended
> > usage):
> >
> > SoC mode:
> >    The device runs Linux by itself, on ARM64 cores included in the
> >    SoC. This use-case of the lan966x is currently almost upstreamed,
> >    using a traditional Device Tree representation of the lan996x HW
> >    blocks [1] A number of drivers for the different IPs of the SoC have
> >    already been merged in upstream Linux (see
> >    arch/arm/boot/dts/lan966x.dtsi)
> >
> > PCI mode:
> >   The lan966x SoC is configured as a PCIe endpoint (PCI card),
> >   connected to a separate platform that acts as the PCIe root complex.
> >   In this case, all the IO memories that are exposed by the devices
> >   embedded on this SoC are exposed through PCI BARs 0 & 1 and the ARM64
> >   cores of the SoC are not used. Since this is a PCIe card, it can be
> >   plugged on any platform, of any architecture supporting PCIe.
> >
> > This work only focus on the *PCI mode* usage. In this mode, we have the
> > following prerequisites:
> > - Should work on all architectures (x86, ARM64, etc)
> > - Should be self-contained in the driver
>
> > - Should be able to reuse all existing platform drivers
>
> I've said before (in different words) that using an existing platform
> driver for hardware on a PCI card requires shims, which have been
> strongly rejected by the Linux kernel.

Do you have an example of what you are saying has been rejected?
Because I have no clue what you are referring to.

The kernel already has a way to divide up a PCI device into multiple
non-PCI sub-drivers. It's auxiliary_bus. Is that not a "shim"? So why
not use that? From the docs:

 * A key requirement for utilizing the auxiliary bus is that there is no
 * dependency on a physical bus, device, register accesses or regmap support.
 * These individual devices split from the core cannot live on the platform bus
 * as they are not physical devices that are controlled by DT/ACPI.  The same
 * argument applies for not using MFD in this scenario as MFD relies on
 * individual function devices being physical devices.


In the use cases here, they are physical devices, because they are the
same devices when Linux is running on the SoC.
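
For illustration only (the names and the chosen split are hypothetical;
this is not an existing driver), registering one sub-block of such a
card on the auxiliary bus looks roughly like this:

#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

struct lan966x_aux_dev {
	struct auxiliary_device adev;
	void __iomem *regs;	/* slice of BAR 0 owned by this sub-block */
};

static void lan966x_aux_release(struct device *dev)
{
	kfree(container_of(dev, struct lan966x_aux_dev, adev.dev));
}

/* Called by the parent PCI driver for each sub-block it wants to expose */
static int lan966x_add_aux_dev(struct device *parent, void __iomem *regs)
{
	struct lan966x_aux_dev *ldev;
	int ret;

	ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
	if (!ldev)
		return -ENOMEM;

	ldev->regs = regs;
	ldev->adev.name = "i2c";	/* matched as "<modname>.i2c" */
	ldev->adev.dev.parent = parent;
	ldev->adev.dev.release = lan966x_aux_release;

	ret = auxiliary_device_init(&ldev->adev);
	if (ret) {
		kfree(ldev);
		return ret;
	}

	ret = auxiliary_device_add(&ldev->adev);
	if (ret)
		auxiliary_device_uninit(&ldev->adev);
	return ret;
}

A matching struct auxiliary_driver with an id_table entry for
"<modname>.i2c" would then bind to it.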

> >
> > In PCI mode, the card runs a firmware (not that it matters at all by
> > the way) which configure the card in PCI mode at boot time. In this
> > mode, it exposes a single PCI physical function associated with
> > vendor/product 0x1055/0x9660. This is not a multi-function PCI device !
> > This means that all the IO memories (peripheral memories, device
> > memories, registers, whatever you call them) are accessible using
> > standard readl()/writel() on the BARs that have been remapped. For
> > instance (not accurate), in the BAR 0, we will have this kind of memory
> > map:
> >
> >            BAR0
> >    0x0 ┌───────────┐
> >        │           │
> >        ├───────────┤
> >        │   Clock   │
> >        │ controller│
> >        ├───────────┤
> >        │           │
> >        ├───────────┤
> >        │   I2C     │
> >        │ controller│
> >        ├───────────┤
> >        │           │
> >        ├───────────┤
> >        │   MDIO    │
> >        │ Controller│
> >        ├───────────┤
> >        │           │
> >        ├───────────┤
> >        │  Switch   │
> >        │ Controller│
> >        ├───────────┤
> >        │           │
> >        │   ...     │
> >
> >
> > It also exposes either a single interrupt via the legacy interrupt
> > (which can then be demuxed by reading the SoC internal interrupt
> > controller registers), or multiple interrupts using MSI interrupts.
> >
> > As stated before, all these peripherals are already supported in SoC
> > mode and thus, there are aleready existing platform drivers for each of
> > them. For more information about the devices that are exposed please
> > see link [1] which is the device-tree overlay used to describe the
> > lan9662 card.
> >
> > In order to use the ethernet switch, we must configure everything that
> > lies around this ethernet controller, here are a few amongst all of
> > them:
> > - MDIO bus
> > - I2C controller for SFP modules access
> > - Clock controller
> > - Ethernet controller
> > - Syscon
> >
> > Since all the platform drivers already exist for these devices, we
> > want to reuse them. Multiple solutions were thought of (fwnode, mfd,
> > ACPI, device-tree) and eventually ruled out for some of them and efforts
> > were made to try to tackle that (using fwnode [2], device-tree [3])
> >
>
> > One way to do so is to use a device-tree overlay description that is
> > loaded dynamically on the PCI device OF node. This can be done using the
> > various device-tree series series that have been proposed (included
> > this one). On systems that do not provide a device-tree of_root, create
> > an empty of_root node (see [4]). Then during PCI enumeration, create
> > device-tree node matching the PCI tree that was enumerated (See [5]).
> > This is needed since the PCI card can be plugged on whatever port the
> > user wants and thus it can not be statically described using a fixed
> > "target-path" property in the overlay.
>
> I understand that this is a use case and a desire to implement a
> solution for the use case.  But this is a very non-standard model.
> The proposal exposes a bunch of hardware beyond the pci interface
> in a non-pci method.

It is not the proposal that exposes a bunch of hardware. This device
exposes a bunch of hardware. As you say, it is *beyond the PCI
interface*, so it has zero to do with PCI.

We already support non-discoverable h/w behind a PCI bus. It's called
ISA. There are powerpc DT files in the tree with ISA devices.

> No, just no.  Respect the pci interface boundary and do not drag
> devicetree into an effort to pierce and straddle that boundary
> (by adding information about the card, beyond the PCI controller,
> into the system devicetree).  Information about dynamically
> discoverable hardware does not belong in the devicetree.

What is discoverable? Nothing more than a VID/PID.

Your suggestion is simply to use the VID/PID(s), and then the PCI
driver for the card will have all the details that implies. There's a
name for that: board files. Just like when we had a single machine ID
per board registered with RMK and the kernel had to contain all the
configuration details for each machine ID. It's not just one card here.
This is a chip, and I imagine what's used or not used, and how the
downstream peripherals are configured, all depends on the customer and
their specific designs.

If DT is not used here, then swnodes (DT bindings embedded in the
kernel, or platform_data 2.0) will be. It's exactly the same structure
in the kernel. It's still going to be non-PCI drivers for all the
sub-devices. The change in the drivers will not be making them PCI
drivers, but simply converting DT APIs to fwnode APIs, which is
pointless churn IMO.
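
A hypothetical sketch of what that alternative would look like (the
node name and property value are invented for illustration):

#include <linux/property.h>

/* Properties for the I2C sub-block, expressed as a software node */
static const struct property_entry lan966x_i2c_props[] = {
	PROPERTY_ENTRY_U32("clock-frequency", 100000),
	{ }
};

static const struct software_node lan966x_i2c_swnode = {
	.name       = "i2c",
	.properties = lan966x_i2c_props,
};

/* The parent PCI driver would register this and hang a device off it */
static int lan966x_register_swnode(void)
{
	return software_node_register(&lan966x_i2c_swnode);
}

The consuming driver would then read the property through
fwnode_property_read_u32() instead of of_property_read_u32() - which is
exactly the DT-API-to-fwnode-API conversion mentioned above.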

Finally, let's not forget the FPGA use case. We already support DT
overlays for FPGAs. The fact that the FPGA sits behind a PCI interface
is pretty much irrelevant to the problem. There is simply no way the
kernel is going to contain information about what's within the FPGA
image. If not DT, how should we communicate the FPGA configuration to
the kernel?

Rob
  
Frank Rowand March 31, 2023, 9:56 p.m. UTC | #26
Hi Rob,

This is an off-the-cuff response, so anything that requires me to do research
(e.g. because it involves stuff that I am not educated about) will be an
incomplete answer, but I don't want to delay the discussion too much in
the meantime.

On 3/30/23 10:19, Rob Herring wrote:
> On Wed, Mar 29, 2023 at 11:51 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 3/9/23 02:45, Clément Léger wrote:
>>> Le Wed, 8 Mar 2023 01:31:52 -0600,
>>> Frank Rowand <frowand.list@gmail.com> a écrit :
>>>
>>>> On 3/6/23 18:52, Rob Herring wrote:
>>>>> On Mon, Mar 6, 2023 at 3:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>
>>>>
>>>> < snip >
>>>>
>>>> Hi Rob,
>>>>
>>>> I am in no position to comment intelligently on your comments until I
>>>> understand the SoC on PCI card model I am asking to be described in
>>>> this subthread.
>>>
>>> Hi Frank,
>>>
>>> Rather than answering all of the assumptions that were made in the upper
>>> thread (that are probably doing a bit too much of inference), I will
>>> re-explain that from scratch.
>>
>> Thanks!  The below answers a lot of my questions.
>>>
>>> Our usecase involves the lan966x SoCs. These SoCs are mainly targeting
>>> networking application and offers multiple SFP and RGMII interfaces.
>>> This Soc can be used in two exclusive modes (at least for the intended
>>> usage):
>>>
>>> SoC mode:
>>>    The device runs Linux by itself, on ARM64 cores included in the
>>>    SoC. This use-case of the lan966x is currently almost upstreamed,
>>>    using a traditional Device Tree representation of the lan996x HW
>>>    blocks [1] A number of drivers for the different IPs of the SoC have
>>>    already been merged in upstream Linux (see
>>>    arch/arm/boot/dts/lan966x.dtsi)
>>>
>>> PCI mode:
>>>   The lan966x SoC is configured as a PCIe endpoint (PCI card),
>>>   connected to a separate platform that acts as the PCIe root complex.
>>>   In this case, all the IO memories that are exposed by the devices
>>>   embedded on this SoC are exposed through PCI BARs 0 & 1 and the ARM64
>>>   cores of the SoC are not used. Since this is a PCIe card, it can be
>>>   plugged on any platform, of any architecture supporting PCIe.
>>>
>>> This work only focus on the *PCI mode* usage. In this mode, we have the
>>> following prerequisites:
>>> - Should work on all architectures (x86, ARM64, etc)
>>> - Should be self-contained in the driver
>>
>>> - Should be able to reuse all existing platform drivers
>>
>> I've said before (in different words) that using an existing platform
>> driver for hardware on a PCI card requires shims, which have been
>> strongly rejected by the Linux kernel.
> 
> Do you have an example of what you are saying has been rejected
> because I have no clue what you are referring to?

Ancient history.  Using Windows drivers, proprietary Unix drivers, etc.
in the Linux kernel.  Is that enough to make clear what I'm talking about,
or do I need to provide specific pointers?

> 
> The kernel already has a way to divide up a PCI device into multiple
> non-PCI sub-drivers. It's auxiliary_bus. Is that not a "shim"? So why
> not use that? From the docs:

I don't know auxiliary_bus, so my response here is iffy.  But from what
you describe here, auxiliary_bus sounds like a potential fit.

> 
>  * A key requirement for utilizing the auxiliary bus is that there is no
>  * dependency on a physical bus, device, register accesses or regmap support.
>  * These individual devices split from the core cannot live on the platform bus
>  * as they are not physical devices that are controlled by DT/ACPI.  The same
>  * argument applies for not using MFD in this scenario as MFD relies on
>  * individual function devices being physical devices.
> 
> 
> In the usecases here, they are physical devices because it's the same
> devices when Linux is running on the SoC.
> 
>>>
>>> In PCI mode, the card runs a firmware (not that it matters at all by
>>> the way) which configure the card in PCI mode at boot time. In this
>>> mode, it exposes a single PCI physical function associated with
>>> vendor/product 0x1055/0x9660. This is not a multi-function PCI device !
>>> This means that all the IO memories (peripheral memories, device
>>> memories, registers, whatever you call them) are accessible using
>>> standard readl()/writel() on the BARs that have been remapped. For
>>> instance (not accurate), in the BAR 0, we will have this kind of memory
>>> map:
>>>
>>>            BAR0
>>>    0x0 ┌───────────┐
>>>        │           │
>>>        ├───────────┤
>>>        │   Clock   │
>>>        │ controller│
>>>        ├───────────┤
>>>        │           │
>>>        ├───────────┤
>>>        │   I2C     │
>>>        │ controller│
>>>        ├───────────┤
>>>        │           │
>>>        ├───────────┤
>>>        │   MDIO    │
>>>        │ Controller│
>>>        ├───────────┤
>>>        │           │
>>>        ├───────────┤
>>>        │  Switch   │
>>>        │ Controller│
>>>        ├───────────┤
>>>        │           │
>>>        │   ...     │
>>>
>>>
>>> It also exposes either a single interrupt via the legacy interrupt
>>> (which can then be demuxed by reading the SoC internal interrupt
>>> controller registers), or multiple interrupts using MSI interrupts.
>>>
>>> As stated before, all these peripherals are already supported in SoC
>>> mode and thus, there are aleready existing platform drivers for each of
>>> them. For more information about the devices that are exposed please
>>> see link [1] which is the device-tree overlay used to describe the
>>> lan9662 card.
>>>
>>> In order to use the ethernet switch, we must configure everything that
>>> lies around this ethernet controller, here are a few amongst all of
>>> them:
>>> - MDIO bus
>>> - I2C controller for SFP modules access
>>> - Clock controller
>>> - Ethernet controller
>>> - Syscon
>>>
>>> Since all the platform drivers already exist for these devices, we
>>> want to reuse them. Multiple solutions were thought of (fwnode, mfd,
>>> ACPI, device-tree) and eventually ruled out for some of them and efforts
>>> were made to try to tackle that (using fwnode [2], device-tree [3])
>>>
>>
>>> One way to do so is to use a device-tree overlay description that is
>>> loaded dynamically on the PCI device OF node. This can be done using the
>>> various device-tree series series that have been proposed (included
>>> this one). On systems that do not provide a device-tree of_root, create
>>> an empty of_root node (see [4]). Then during PCI enumeration, create
>>> device-tree node matching the PCI tree that was enumerated (See [5]).
>>> This is needed since the PCI card can be plugged on whatever port the
>>> user wants and thus it can not be statically described using a fixed
>>> "target-path" property in the overlay.
>>
>> I understand that this is a use case and a desire to implement a
>> solution for the use case.  But this is a very non-standard model.
>> The proposal exposes a bunch of hardware beyond the pci interface
>> in a non-pci method.
> 
> It is not the proposal that exposes a bunch of hardware. This device
> exposes a bunch of hardware. As you say, it is *beyond the PCI
> interface*, so it has zero to do with PCI.
> 
> We already support non-discoverable h/w behind a PCI bus. It's called
> ISA. There's powerpc DT files in the tree with ISA devices.

Thanks for the specific example.

Off the top of my head, I suspect ISA is a very different case.  Wasn't
ISA an example of a legacy bus that was expected to disappear?  (It has
been a long time since I have thought about ISA.)  I'm not sure whether
the powerpc DT files with ISA devices are a good argument here or not -
I would have to look at them.

> 
>> No, just no.  Respect the pci interface boundary and do not drag
>> devicetree into an effort to pierce and straddle that boundary
>> (by adding information about the card, beyond the PCI controller,
>> into the system devicetree).  Information about dynamically
>> discoverable hardware does not belong in the devicetree.
> 
> What is discoverable? Nothing more than a VID/PID.
> 
> Your suggestion is simply use the VID/PID(s) and then the PCI driver
> for the card will have all the details that implies. There's a name
> for that: board files. Just like we had a single machine ID per board

No, IMHO it is most definitely not analogous to board files.

Board files were a mix of configuration information and description
of hardware that was non-discoverable.  The hardware present could
vary quite a bit between different boards within a family.  PCI
device drivers have been able to contain the information needed to
access the PCI device, and to handle variations between versions
of the board during the board's evolving life.

> registered with RMK and the kernel had to contain all the
> configuration details for each machine ID. It's not just 1 card here.
> This is a chip and I imagine what's used or not used or how the
> downstream peripherals are configured all depend on the customer and
> their specific designs.

You are saying that this is a universal board that will be used in
different ways by different customers?

This is one of the things I've been trying to get explained (e.g.
why I've asked for a high-level data sheet).  The use case as
presented in this patch series is for a specific PCI card that
has an SoC on it that can provide network functionality.  My
guess would be that the card has one or more physical network
connectors that make the board useful in a larger system.  The
SoC potentially has a lot of other functionality that could be
used if the proper physical connectors exist, but I don't know
because the information has not been provided.  It has been
asserted that other logic blocks on the SoC need to be controlled
by other existing platform drivers - but is that because they
are needed to best use the network functionality (e.g., as a
theoretical example, is some power management control desired
so the board doesn't consume excess power when the network is
idle)?

> 
> If DT is not used here, then swnodes (DT bindings embedded in the
> kernel or platform_data 2.0) will be. It's exactly the same structure
> in the kernel. It's still going to be non-PCI drivers for all the sub
> devices. The change in the drivers will not be making them PCI
> drivers, but simply converting DT APIs to fwnode APIs which is
> pointless churn IMO.

Why can't there be PCI drivers for devices on the PCI board?

Why add all the complexity of doing an impedance conversion to
make a device on a PCI card look like it is on the system bus
(in other words, the driver uses the DT description and APIs
as if the device was located on a system bus instead of being
on the other side of the PCI socket)?

> 
> Finally, let's not forget the FPGA usecase. We already support DT
> overlays for FPGAs. The fact that the FPGA sits behind a PCI interface
> is pretty much irrelevant to the problem. There is simply no way the

I consider the PCI interface to be a big deal, whether for an FPGA
or for an SoC on a PCI card.

The PCI socket (or cable) insulates all hardware on the PCI card from
the host system.  The PCI controller "driver" is aware of details of
the host system (e.g. it will have to arrange for system interrupts used
to implement the PCI interrupts), but PCI drivers do not manage
those details - they deal with PCI concepts (e.g. PCI interrupts).

Everything I've said above may be a little cloudy and incomplete.  That
could reflect lack of knowledge on my part and/or incomplete information
about the need and the use case.  I've clearly been struggling with
trying to understand the proposal, the architecture, and the mental
model involved.  It has been asserted that I don't need to dig into
the previous rejected alternative approaches, but I suspect that not
having done so is contributing to my confusion.

> kernel is going to contain information about what's within the FPGA
> image. If not DT, how should we communicate the FPGA configuration to
> the kernel?

If the FPGA creates and exposes non-discoverable devices on the system
busses (not behind the PCI socket boundary) then DT seems reasonable.
If overlays, then preferably static overlays that can be applied by
the bootloader before the kernel boots.  (Yes, I understand how restrictive
and unpopular that is for the FPGA community, but that also does not
negate the fact that overlay support has never been properly completed
for generic use.)

I don't want to re-debate the whole FPGA discussion. For this patch
series I have been trying to narrow the discussion to the SoC on a
PCI board issue.  I'm trying to either find a solution for the
SoC on a PCI board or not accept non-solutions for that.

> 
> Rob