[V5,3/3] PCI: xilinx-xdma: Add Xilinx XDMA Root Port driver
Commit Message
Add support for Xilinx XDMA Soft IP core as Root Port.
Zynq UltraScale+ MPSoC devices support the XDMA soft IP module in
programmable logic.
The XDMA soft IP block includes an integrated bridge function that can
act as a PCIe Root Port.
Signed-off-by: Thippeswamy Havalige <thippeswamy.havalige@amd.com>
Signed-off-by: Bharat Kumar Gogada <bharat.kumar.gogada@amd.com>
|Reported-by: kernel test robot <lkp@intel.com>
|Reported-by: Dan Carpenter <error27@gmail.com>
|Closes: https://lore.kernel.org/r/202305261250.2cs1phTS-lkp@intel.com/
---
Changes in v5:
- Added detailed comments for the link_up check.
- Changed uppercase hex values to lowercase.
Changes in v4:
- Changed an unsigned integer to a signed integer.
- Fixed the return value to -EINVAL.
Changes in v3:
- Updated the copyright year to 2023.
- Added the bitfield header to avoid an implicit-declaration warning.
- Fixed an indentation issue.
- Fixed code style.
Changes in v2:
- Removed unnecessary inclusion of header files.
- Added a subset of interrupt error bits to the common header file.
- Used the pci_is_root_bus() helper.
- Removed kernel-doc comments from private functions.
- Replaced of_get_next_child() with of_get_child_by_name().
- Replaced of_address_to_resource() with platform_get_resource().
- Fixed a return value.
drivers/pci/controller/Kconfig | 11 +
drivers/pci/controller/Makefile | 1 +
drivers/pci/controller/pcie-xilinx-common.h | 1 +
drivers/pci/controller/pcie-xilinx-dma-pl.c | 803 ++++++++++++++++++++++++++++
4 files changed, 816 insertions(+)
create mode 100644 drivers/pci/controller/pcie-xilinx-dma-pl.c
Comments
On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> Add support for Xilinx XDMA Soft IP core as Root Port.
> ...
> |Reported-by: kernel test robot <lkp@intel.com>
> |Reported-by: Dan Carpenter <error27@gmail.com>
> |Closes: https://lore.kernel.org/r/202305261250.2cs1phTS-lkp@intel.com/
Remove these. I mentioned this before:
https://lore.kernel.org/r/ZHd/7AaLaGyr1jNA@bhelgaas
> + * struct pl_dma_pcie - PCIe port information
> + * @dev: Device pointer
> + * @reg_base: IO Mapped Register Base
> + * @irq: Interrupt number
> + * @cfg: Holds mappings of config space window
> + * @phys_reg_base: Physical address of reg base
> + * @intx_domain: Legacy IRQ domain pointer
> + * @pldma_domain: PL DMA IRQ domain pointer
> + * @resources: Bus Resources
> + * @msi: MSI information
> + * @irq_misc: Legacy and error interrupt number
> + * @intx_irq: legacy interrupt number
> + * @lock: lock protecting shared register access
Capitalize the intx_irq and lock descriptions so they match the others.
"Legacy and error interrupt number" and "legacy interrupt number"
sound like they overlap -- "legacy interrupt number" is part of both.
Is that an error?
> +static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
> +{
> + struct pl_dma_pcie *port = bus->sysdata;
> +
> + /* Check if link is up when trying to access downstream ports */
> + if (!pci_is_root_bus(bus)) {
> + /*
> + * If the link goes down after we check for link-up, we have a problem:
> + * if a PIO request is initiated while link-down, the whole controller
> + * hangs, and even after link comes up again, previous PIO requests
> + * won't work, and a reset of the whole PCIe controller is needed.
> + * Henceforth we need link-up check here to avoid sending PIO request
> + * when link is down.
Wrap this comment so it fits in 80 columns like the rest of the file.
I think the comment was added because I pointed out that this is racy.
Obviously the comment doesn't *fix* the race, and it actually doesn't
even describe the race.
Even with the xilinx_pl_dma_pcie_link_up() check, this is racy because
xilinx_pl_dma_pcie_link_up() may tell you the link is up, but the link
may go down before the driver attempts the config transaction. THAT
is the race.
If the controller hangs in that situation, that's a hardware defect,
and from your comment, it sounds like it's unrecoverable.
> + */
> + if (!xilinx_pl_dma_pcie_link_up(port))
> + return false;
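For illustration only, a minimal sketch of a wrapped comment that also
names the race (assumed wording, not the author's actual fix):

	/*
	 * Checking link-up only narrows the race: the link may still go
	 * down between this check and the config access below.  If a PIO
	 * request is issued while the link is down, the controller hangs
	 * and does not recover even after the link returns; a full reset
	 * of the PCIe controller is then required.  This check avoids the
	 * common case but cannot close the race entirely.
	 */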
> +static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
> + irq_hw_number_t hwirq)
Wrap to fit in 80 columns like the rest of the file.
> +static struct irq_chip xilinx_msi_irq_chip = {
> + .name = "pl_dma_pciepcie:msi",
Why does this name have two copies of "pcie" in it? This driver has
four irq_chip structs; maybe the names could be more similar?
xilinx_leg_irq_chip INTx
xilinx_msi_irq_chip pl_dma_pciepcie:msi
xilinx_irq_chip Xilinx MSI
xilinx_pl_dma_pcie_event_irq_chip RC-Event
> + /* Plug the INTx chained handler */
> + irq_set_chained_handler_and_data(port->intx_irq,
> + xilinx_pl_dma_pcie_intx_flow, port);
> +
> + /* Plug the main event chained handler */
> + irq_set_chained_handler_and_data(port->irq,
> + xilinx_pl_dma_pcie_event_flow, port);
What's the reason for using chained IRQs? Can this be done without
them? I don't claim to understand all the issues here, but it seems
better to avoid chained IRQ handlers when possible:
https://lore.kernel.org/all/877csohcll.ffs@tglx/
> + /*set the Bridge enable bit */
Space before "Set". I mentioned this before at
https://lore.kernel.org/r/ZHd/7AaLaGyr1jNA@bhelgaas
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + dev_err(dev, "missing \"reg\" property\n");
All your other error messages are capitalized. Make this one match.
> + bridge->ops = (struct pci_ops *)&xilinx_pl_dma_pcie_ops.pci_ops;
I don't think this cast is needed.
Bjorn
Hi Bjorn,
> From: Bjorn Helgaas <helgaas@kernel.org>
>
> On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > Add support for Xilinx XDMA Soft IP core as Root Port.
> > ...
>
> > |Reported-by: kernel test robot <lkp@intel.com>
> > |Reported-by: Dan Carpenter <error27@gmail.com>
> > |Closes: https://lore.kernel.org/r/202305261250.2cs1phTS-lkp@intel.com/
>
> Remove these. I mentioned this before:
> https://lore.kernel.org/r/ZHd/7AaLaGyr1jNA@bhelgaas
- Agreed, I'll remove these in the next patch.
> > + * struct pl_dma_pcie - PCIe port information
> > + * @dev: Device pointer
> > + * @reg_base: IO Mapped Register Base
> > + * @irq: Interrupt number
> > + * @cfg: Holds mappings of config space window
> > + * @phys_reg_base: Physical address of reg base
> > + * @intx_domain: Legacy IRQ domain pointer
> > + * @pldma_domain: PL DMA IRQ domain pointer
> > + * @resources: Bus Resources
> > + * @msi: MSI information
> > + * @irq_misc: Legacy and error interrupt number
> > + * @intx_irq: legacy interrupt number
> > + * @lock: lock protecting shared register access
>
> Capitalize the intx_irq and lock descriptions so they match the others.
- Agreed, I'll fix it in the next patch
> "Legacy and error interrupt number" and "legacy interrupt number"
> sound like they overlap -- "legacy interrupt number" is part of both.
> Is that an error?
- Agreed, I'll modify this comment to "legacy interrupt number". (This IRQ line carries both the legacy interrupts and the error interrupt bits.)
> > +static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
> > +{
> > + struct pl_dma_pcie *port = bus->sysdata;
> > +
> > + /* Check if link is up when trying to access downstream ports */
> > + if (!pci_is_root_bus(bus)) {
> > +		/*
> > +		 * If the link goes down after we check for link-up, we have a problem:
> > +		 * if a PIO request is initiated while link-down, the whole controller
> > +		 * hangs, and even after link comes up again, previous PIO requests
> > +		 * won't work, and a reset of the whole PCIe controller is needed.
> > +		 * Henceforth we need link-up check here to avoid sending PIO request
> > +		 * when link is down.
>
> Wrap this comment so it fits in 80 columns like the rest of the file.
>
> I think the comment was added because I pointed out that this is racy.
> Obviously the comment doesn't *fix* the race, and it actually doesn't even
> describe the race.
- Agreed, I'll add comments regarding race condition.
> Even with the xilinx_pl_dma_pcie_link_up() check, this is racy because
> xilinx_pl_dma_pcie_link_up() may tell you the link is up, but the link may go
> down before the driver attempts the config transaction. THAT is the race.
>
> If the controller hangs in that situation, that's a hardware defect, and from
> your comment, it sounds like it's unrecoverable.
>
> > + */
> > + if (!xilinx_pl_dma_pcie_link_up(port))
> > + return false;
>
> > +static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
> > +				       irq_hw_number_t hwirq)
>
> Wrap to fit in 80 columns like the rest of the file.
>
> > +static struct irq_chip xilinx_msi_irq_chip = {
> > + .name = "pl_dma_pciepcie:msi",
>
> Why does this name have two copies of "pcie" in it? This driver has four
> irq_chip structs; maybe the names could be more similar?
- Agreed, I'll modify all irq_chip names in the next patch.
Example:
static struct irq_chip xilinx_msi_irq_chip = {
.name = "pl_dma:PCIe MSI",
> xilinx_leg_irq_chip INTx
> xilinx_msi_irq_chip pl_dma_pciepcie:msi
> xilinx_irq_chip Xilinx MSI
> xilinx_pl_dma_pcie_event_irq_chip RC-Event
>
> > + /* Plug the INTx chained handler */
> > + irq_set_chained_handler_and_data(port->intx_irq,
> > + xilinx_pl_dma_pcie_intx_flow, port);
> > +
> > + /* Plug the main event chained handler */
> > +	irq_set_chained_handler_and_data(port->irq,
> > +					 xilinx_pl_dma_pcie_event_flow, port);
>
> What's the reason for using chained IRQs? Can this be done without them? I
> don't claim to understand all the issues here, but it seems better to avoid
> chained IRQ handlers when possible:
- As per the comments in this https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
"It is fine to have chained interrupts when bootloader, device tree and kernel under control. Only if BIOS/UEFI comes into
play the user is helpless against interrupt storm which will cause system to hangs."
We are using ARM embedded platform with Bootloader, Devicetree flow.
> https://lore.kernel.org/all/877csohcll.ffs@tglx/
>
> > + /*set the Bridge enable bit */
- Agreed, I'll modify it in the next patch.
> Space before "Set". I mentioned this before at
> https://lore.kernel.org/r/ZHd/7AaLaGyr1jNA@bhelgaas
>
> > + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > + if (!res) {
> > + dev_err(dev, "missing \"reg\" property\n");
>
> All your other error messages are capitalized. Make this one match.
>
> > + bridge->ops = (struct pci_ops *)&xilinx_pl_dma_pcie_ops.pci_ops;
>
> I don't think this cast is needed.
- Agreed, I'll modify it in the next patch.
>
> Bjorn
[+cc Thomas in case he wants to comment on chained interrupts]
On Thu, Jul 20, 2023 at 06:37:03AM +0000, Havalige, Thippeswamy wrote:
> > From: Bjorn Helgaas <helgaas@kernel.org>
> > ...
> > On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > > Add support for Xilinx XDMA Soft IP core as Root Port.
> > > ...
> > > + * struct pl_dma_pcie - PCIe port information
> > > + * @intx_domain: Legacy IRQ domain pointer
> > > + * @pldma_domain: PL DMA IRQ domain pointer
> > > + * @irq_misc: Legacy and error interrupt number
> > > + * @intx_irq: legacy interrupt number
> > "Legacy and error interrupt number" and "legacy interrupt number"
> > sound like they overlap -- "legacy interrupt number" is part of both.
> > Is that an error?
>
> - Agreed, I'll modify this comment to legacy interrupt number. (This
> irq line is for both legacy interrupts and error interrupt bits)
Does "legacy" mean "INTx" in this context? If so, I'd use "INTx"
because it's much more specific. "Legacy" really doesn't contain any
information other than "this is something retained for some kind of
backward compatibility."
If you have more detail about the "error interrupt," that would be
useful as well. Does this refer to an AER interrupt, a "System
Error", something else? I'm looking at the diagram in PCIe r6.0,
Figure 6-3, wondering if this is related to anything there. I suppose
likely it's some Xilinx-specific thing?
> > > + /* Plug the INTx chained handler */
> > > + irq_set_chained_handler_and_data(port->intx_irq,
> > > + xilinx_pl_dma_pcie_intx_flow, port);
> > > +
> > > + /* Plug the main event chained handler */
> > > + irq_set_chained_handler_and_data(port->irq,
> > > +					 xilinx_pl_dma_pcie_event_flow, port);
> >
> > What's the reason for using chained IRQs? Can this be done
> > without them? I don't claim to understand all the issues here,
> > but it seems better to avoid chained IRQ handlers when possible:
> > https://lore.kernel.org/all/877csohcll.ffs@tglx/
> - As per the comments in this
> https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
> "It is fine to have chained interrupts when bootloader, device tree
> and kernel under control. Only if BIOS/UEFI comes into play the user
> is helpless against interrupt storm which will cause system to
> hangs."
>
> We are using ARM embedded platform with Bootloader, Devicetree flow.
I read Thomas' comments as "in general it's better to use regular
interrupts, but we can live with chained interrupts if we have control
of bootloader, device tree, and kernel."
I guess my questions are more like:
- Could this be done with either chained interrupts or regular
interrupts?
- If so, what is the advantage to using chained interrupts?
Across the entire kernel, irq_set_chained_handler_and_data() is
relatively unusual, which makes me think it may be better to use the
more common path if it's possible.
Bjorn
Hi Bjorn,
> From: Bjorn Helgaas <helgaas@kernel.org>
>
> [+cc Thomas in case he wants to comment on chained interrupts]
>
> On Thu, Jul 20, 2023 at 06:37:03AM +0000, Havalige, Thippeswamy wrote:
> > > From: Bjorn Helgaas <helgaas@kernel.org> ...
> > > On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > > > Add support for Xilinx XDMA Soft IP core as Root Port.
> > > > ...
>
> > > > + * struct pl_dma_pcie - PCIe port information
> > > > + * @intx_domain: Legacy IRQ domain pointer
> > > > + * @pldma_domain: PL DMA IRQ domain pointer
> > > > + * @irq_misc: Legacy and error interrupt number
> > > > + * @intx_irq: legacy interrupt number
>
> > > "Legacy and error interrupt number" and "legacy interrupt number"
> > > sound like they overlap -- "legacy interrupt number" is part of both.
> > > Is that an error?
> >
> > - Agreed, I'll modify this comment to legacy interrupt number. (This
> > irq line is for both legacy interrupts and error interrupt bits)
>
> Does "legacy" mean "INTx" in this context? If so, I'd use "INTx"
> because it's much more specific. "Legacy" really doesn't contain any
> information other than "this is something retained for some kind of backward
> compatibility."
>
> If you have more detail about the "error interrupt," that would be useful as
> well. Does this refer to an AER interrupt, a "System Error", something else?
> I'm looking at the diagram in PCIe r6.0, Figure 6-3, wondering if this is related
> to anything there. I suppose likely it's some Xilinx-specific thing?
- Agreed, I'll modify Legacy to INTx. Regarding the error interrupts: these are Xilinx controller-specific interrupts used to notify the user about errors such as cfg timeout, slave unsupported request, and fatal and non-fatal errors.
> > > > + /* Plug the INTx chained handler */
> > > > + irq_set_chained_handler_and_data(port->intx_irq,
> > > > + xilinx_pl_dma_pcie_intx_flow, port);
> > > > +
> > > > + /* Plug the main event chained handler */
> > > > + irq_set_chained_handler_and_data(port->irq,
> > > > +					 xilinx_pl_dma_pcie_event_flow, port);
> > >
> > > What's the reason for using chained IRQs? Can this be done without
> > > them? I don't claim to understand all the issues here, but it seems
> > > better to avoid chained IRQ handlers when possible:
> > > https://lore.kernel.org/all/877csohcll.ffs@tglx/
>
> > - As per the comments in this
> > https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
> > "It is fine to have chained interrupts when bootloader, device tree
> > and kernel under control. Only if BIOS/UEFI comes into play the user
> > is helpless against interrupt storm which will cause system to hangs."
> >
> > We are using ARM embedded platform with Bootloader, Devicetree flow.
>
> I read Thomas' comments as "in general it's better to use regular interrupts,
> but we can live with chained interrupts if we have control of bootloader,
> device tree, and kernel."
>
> I guess my questions are more like:
>
> - Could this be done with either chained interrupts or regular
> interrupts?
> - If so, what is the advantage to using chained interrupts?
With regular interrupts, the interrupt is self-consumed: it is handled entirely within the driver that registered it. Chained interrupts are not self-consumed: the handler does not service the event itself but demultiplexes it and forwards it, via generic_handle_irq(), to the driver the interrupt was actually raised for.
Since MSIs are consumed by endpoints and their drivers, the chained handler forwards each interrupt to the corresponding EP driver (for example, the NVMe subsystem).
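To make the distinction concrete, a minimal sketch with hypothetical
names (foo_err_handler and foo_msi_flow are illustrations, not functions
in this driver):

	/* Regular interrupt: the driver consumes the event itself. */
	static irqreturn_t foo_err_handler(int irq, void *data)
	{
		/* read status, log and clear the error, done */
		return IRQ_HANDLED;
	}
	/* registered with devm_request_irq(dev, irq, foo_err_handler, 0, "foo", port) */

	/* Chained interrupt: the driver only demultiplexes and forwards. */
	static void foo_msi_flow(struct irq_desc *desc)
	{
		struct irq_chip *chip = irq_desc_get_chip(desc);

		chained_irq_enter(chip, desc);
		/*
		 * For each pending hwirq bit:
		 *	generic_handle_domain_irq(domain, bit);
		 * which ends up running the EP driver's own handler.
		 */
		chained_irq_exit(chip, desc);
	}
	/* registered with irq_set_chained_handler_and_data(irq, foo_msi_flow, port) */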
>
>
>
> Across the entire kernel, irq_set_chained_handler_and_data() is relatively
> unusual, which makes me think it may be better to use the more common
> path if it's possible.
>
> Bjorn
Regards,
Thippeswamy H
On Mon, Jul 24, 2023 at 06:40:58AM +0000, Havalige, Thippeswamy wrote:
> > From: Bjorn Helgaas <helgaas@kernel.org>
> > On Thu, Jul 20, 2023 at 06:37:03AM +0000, Havalige, Thippeswamy wrote:
> > > > From: Bjorn Helgaas <helgaas@kernel.org> ...
> > > > On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > > > > Add support for Xilinx XDMA Soft IP core as Root Port.
> > > > > ...
> > If you have more detail about the "error interrupt," that would be
> > useful as well. Does this refer to an AER interrupt, a "System
> > Error", something else? I'm looking at the diagram in PCIe r6.0,
> > Figure 6-3, wondering if this is related to anything there. I
> > suppose likely it's some Xilinx-specific thing?
>
> - Agreed, ll modify Legacy to INTx, and regarding error interrupts
> these are Xilinx controller specific interrupts which are used to
> notify the user about errors such as cfg timeout, slave unsupported
> requests,Fatal and non fatal error.
This would be great material for comments and/or a revised commit log.
> > > > > + /* Plug the INTx chained handler */
> > > > > + irq_set_chained_handler_and_data(port->intx_irq,
> > > > > + xilinx_pl_dma_pcie_intx_flow, port);
> > > > > +
> > > > > + /* Plug the main event chained handler */
> > > > > + irq_set_chained_handler_and_data(port->irq,
> > > > > +					 xilinx_pl_dma_pcie_event_flow, port);
> > > >
> > > > What's the reason for using chained IRQs? Can this be done without
> > > > them? I don't claim to understand all the issues here, but it seems
> > > > better to avoid chained IRQ handlers when possible:
> > > > https://lore.kernel.org/all/877csohcll.ffs@tglx/
> >
> > > - As per the comments in this
> > > https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
> > > "It is fine to have chained interrupts when bootloader, device tree
> > > and kernel under control. Only if BIOS/UEFI comes into play the user
> > > is helpless against interrupt storm which will cause system to hangs."
> > >
> > > We are using ARM embedded platform with Bootloader, Devicetree flow.
> >
> > I read Thomas' comments as "in general it's better to use regular
> > interrupts, but we can live with chained interrupts if we have
> > control of bootloader, device tree, and kernel."
> >
> > I guess my questions are more like:
> >
> > - Could this be done with either chained interrupts or regular
> > interrupts?
> > - If so, what is the advantage to using chained interrupts?
> With regular interrupts, these interrupts are self-consumed
> interrupts (interrupt is handled within driver) but where as chained
> interrupts are not self consumed (interrupts are not handled within
> the driver, but forwarded to different driver for which the actual
> interrupt is raised) but these interrupts are demultiplexed and
> forwards interrupt to another subsystem by calling
> generic_handle_irq().
>
> As, MSI generic handlers are consumed by Endpoints and end point
> drivers, chained handlers forward the interrupt to the specific EP
> driver (For example NVME subsystem or any other subsystem).
This doesn't really explain it for me, probably because of my IRQ
ignorance.
I compared xilinx_pl_dma (which uses chained interrupts) with
pci-aardvark.c (which does not).
- xilinx_pl_dma_pcie_setup_irq() calls platform_get_irq(0) once and
sets up xilinx_pl_dma_pcie_event_flow() as the handler.
- advk_pcie_probe() calls platform_get_irq(0) once and sets up
advk_pcie_irq_handler() as the handler.
- xilinx_pl_dma_pcie_event_flow() reads XILINX_PCIE_DMA_REG_IDR to
learn which interrupts are pending and calls
generic_handle_domain_irq() for each.
- advk_pcie_irq_handler() calls advk_pcie_handle_int(), which reads
PCIE_ISR0_REG and PCIE_ISR1_REG to learn which interrupts are
pending and calls generic_handle_domain_irq() for each.
It seems like both drivers do essentially the same thing, but
xilinx_pl_dma_pcie_event_flow() is a chained handler and
advk_pcie_irq_handler() is not.
Is there some underlying difference in the way the hardware works that
means xilinx_pl_dma needs a chained handler while aardvark does not?
Bjorn
Hi Bjorn,
> From: Bjorn Helgaas <helgaas@kernel.org>
>
> On Mon, Jul 24, 2023 at 06:40:58AM +0000, Havalige, Thippeswamy wrote:
> > > From: Bjorn Helgaas <helgaas@kernel.org>
> > > On Thu, Jul 20, 2023 at 06:37:03AM +0000, Havalige, Thippeswamy wrote:
> > > > > From: Bjorn Helgaas <helgaas@kernel.org> ...
> > > > > On Wed, Jun 28, 2023 at 02:58:12PM +0530, Thippeswamy Havalige wrote:
> > > > > > Add support for Xilinx XDMA Soft IP core as Root Port.
> > > > > > ...
>
> > > If you have more detail about the "error interrupt," that would be
> > > useful as well. Does this refer to an AER interrupt, a "System
> > > Error", something else? I'm looking at the diagram in PCIe r6.0,
> > > Figure 6-3, wondering if this is related to anything there. I
> > > suppose likely it's some Xilinx-specific thing?
> > - Agreed, ll modify Legacy to INTx, and regarding error interrupts
> > these are Xilinx controller specific interrupts which are used to
> > notify the user about errors such as cfg timeout, slave unsupported
> > requests,Fatal and non fatal error.
>
> This would be great material for comments and/or a revised commit log.
- Agreed, I'll add this as a comment.
> > > > > > +	/* Plug the INTx chained handler */
> > > > > > +	irq_set_chained_handler_and_data(port->intx_irq,
> > > > > > +					 xilinx_pl_dma_pcie_intx_flow, port);
> > > > > > +
> > > > > > +	/* Plug the main event chained handler */
> > > > > > +	irq_set_chained_handler_and_data(port->irq,
> > > > > > +					 xilinx_pl_dma_pcie_event_flow, port);
> > > > >
> > > > > What's the reason for using chained IRQs? Can this be done
> > > > > without them? I don't claim to understand all the issues here,
> > > > > but it seems better to avoid chained IRQ handlers when possible:
> > > > > https://lore.kernel.org/all/877csohcll.ffs@tglx/
> > >
> > > > - As per the comments in this
> > > > https://lkml.kernel.org/lkml/alpine.DEB.2.20.1705232307330.2409@nanos/T/
> > > > "It is fine to have chained interrupts when bootloader, device tree
> > > > and kernel under control. Only if BIOS/UEFI comes into play the user
> > > > is helpless against interrupt storm which will cause system to hangs."
> > > >
> > > > We are using ARM embedded platform with Bootloader, Devicetree flow.
> > >
> > > I read Thomas' comments as "in general it's better to use regular
> > > interrupts, but we can live with chained interrupts if we have
> > > control of bootloader, device tree, and kernel."
> > >
> > > I guess my questions are more like:
> > >
> > > - Could this be done with either chained interrupts or regular
> > > interrupts?
> > > - If so, what is the advantage to using chained interrupts?
>
> > With regular interrupts, these interrupts are self-consumed interrupts
> > (interrupt is handled within driver) but where as chained interrupts
> > are not self consumed (interrupts are not handled within the driver,
> > but forwarded to different driver for which the actual interrupt is
> > raised) but these interrupts are demultiplexed and forwards interrupt
> > to another subsystem by calling generic_handle_irq().
> >
> > As, MSI generic handlers are consumed by Endpoints and end point
> > drivers, chained handlers forward the interrupt to the specific EP
> > driver (For example NVME subsystem or any other subsystem).
>
> This doesn't really explain it for me, probably because of my IRQ ignorance.
>
> I compared xilinx_pl_dma (which uses chained interrupts) with pci-aardvark.c
> (which does not).
>
> - xilinx_pl_dma_pcie_setup_irq() calls platform_get_irq(0) once and
> sets up xilinx_pl_dma_pcie_event_flow() as the handler.
>
> - advk_pcie_probe() calls platform_get_irq(0) once and sets up
> advk_pcie_irq_handler() as the handler.
>
> - xilinx_pl_dma_pcie_event_flow() reads XILINX_PCIE_DMA_REG_IDR to
> learn which interrupts are pending and calls
> generic_handle_domain_irq() for each.
>
> - advk_pcie_irq_handler() calls advk_pcie_handle_int(), which reads
> PCIE_ISR0_REG and PCIE_ISR1_REG to learn which interrupts are
> pending and calls generic_handle_domain_irq() for each.
>
> It seems like both drivers do essentially the same thing, but
> xilinx_pl_dma_pcie_event_flow() is a chained handler and
> advk_pcie_irq_handler() is not.
>
> Is there some underlying difference in the way the hardware works that
> means xilinx_pl_dma needs a chained handler while aardvark does not?
- The hardware doesn't require chained IRQs; we will try to implement
this feature with regular IRQ handlers and validate it.
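For reference, a sketch of what the regular-IRQ variant could look like,
mirroring what pci-aardvark.c does (an untested assumption, not code from
this patch):

	static irqreturn_t xilinx_pl_dma_pcie_event_handler(int irq, void *dev_id)
	{
		struct pl_dma_pcie *port = dev_id;
		unsigned long val;
		int i;

		/* same demux as the chained flow, but in a regular handler */
		val = pcie_read(port, XILINX_PCIE_DMA_REG_IDR);
		val &= pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
		for_each_set_bit(i, &val, 32)
			generic_handle_domain_irq(port->pldma_domain, i);
		pcie_write(port, val, XILINX_PCIE_DMA_REG_IDR);

		return val ? IRQ_HANDLED : IRQ_NONE;
	}

	/* in xilinx_pl_dma_pcie_setup_irq(), instead of the chained handler: */
	err = devm_request_irq(dev, port->irq, xilinx_pl_dma_pcie_event_handler,
			       0, "xlnx-pl-dma-pcie", port);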
> Bjorn
@@ -325,6 +325,17 @@ config PCIE_XILINX
Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
Host Bridge driver.
+config PCIE_XILINX_DMA_PL
+ bool "Xilinx DMA PL PCIe host bridge support"
+ depends on ARCH_ZYNQMP || COMPILE_TEST
+ depends on PCI_MSI
+ select PCI_HOST_COMMON
+ help
+	  Add support for the Xilinx PL DMA PCIe host bridge. The controller
+	  is a soft IP which can act as a Root Port. If you know your system
+	  provides the Xilinx PCIe host controller bridge DMA as a soft IP,
+	  say Y; if you are not sure, say N.
+
config PCIE_XILINX_NWL
bool "Xilinx NWL PCIe controller"
depends on ARCH_ZYNQMP || COMPILE_TEST
@@ -17,6 +17,7 @@ obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o
+obj-$(CONFIG_PCIE_XILINX_DMA_PL) += pcie-xilinx-dma-pl.o
obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o
obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
@@ -19,6 +19,7 @@
#define XILINX_PCIE_INTR_PME_TO_ACK_RCVD 15
#define XILINX_PCIE_INTR_INTX 16
#define XILINX_PCIE_INTR_PM_PME_RCVD 17
+#define XILINX_PCIE_INTR_MSI 17
#define XILINX_PCIE_INTR_SLV_UNSUPP 20
#define XILINX_PCIE_INTR_SLV_UNEXP 21
#define XILINX_PCIE_INTR_SLV_COMPL 22
new file mode 100644
@@ -0,0 +1,803 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PCIe host controller driver for Xilinx XDMA PCIe Bridge
+ *
+ * Copyright (C) 2023 Xilinx, Inc. All rights reserved.
+ */
+#include <linux/bitfield.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+
+#include "../pci.h"
+#include "pcie-xilinx-common.h"
+
+/* Register definitions */
+#define XILINX_PCIE_DMA_REG_IDR 0x00000138
+#define XILINX_PCIE_DMA_REG_IMR 0x0000013c
+#define XILINX_PCIE_DMA_REG_PSCR 0x00000144
+#define XILINX_PCIE_DMA_REG_RPSC 0x00000148
+#define XILINX_PCIE_DMA_REG_MSIBASE1 0x0000014c
+#define XILINX_PCIE_DMA_REG_MSIBASE2 0x00000150
+#define XILINX_PCIE_DMA_REG_RPEFR 0x00000154
+#define XILINX_PCIE_DMA_REG_IDRN 0x00000160
+#define XILINX_PCIE_DMA_REG_IDRN_MASK 0x00000164
+#define XILINX_PCIE_DMA_REG_MSI_LOW 0x00000170
+#define XILINX_PCIE_DMA_REG_MSI_HI 0x00000174
+#define XILINX_PCIE_DMA_REG_MSI_LOW_MASK 0x00000178
+#define XILINX_PCIE_DMA_REG_MSI_HI_MASK 0x0000017c
+
+#define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
+
+#define XILINX_PCIE_INTR_IMR_ALL_MASK \
+ ( \
+ IMR(LINK_DOWN) | \
+ IMR(HOT_RESET) | \
+ IMR(CFG_TIMEOUT) | \
+ IMR(CORRECTABLE) | \
+ IMR(NONFATAL) | \
+ IMR(FATAL) | \
+ IMR(INTX) | \
+ IMR(MSI) | \
+ IMR(SLV_UNSUPP) | \
+ IMR(SLV_UNEXP) | \
+ IMR(SLV_COMPL) | \
+ IMR(SLV_ERRP) | \
+ IMR(SLV_CMPABT) | \
+ IMR(SLV_ILLBUR) | \
+ IMR(MST_DECERR) | \
+	IMR(MST_SLVERR)		\
+ )
+
+#define XILINX_PCIE_DMA_IMR_ALL_MASK 0x0ff30fe9
+#define XILINX_PCIE_DMA_IDR_ALL_MASK 0xffffffff
+#define XILINX_PCIE_DMA_IDRN_MASK GENMASK(19, 16)
+
+/* Root Port Error Register definitions */
+#define XILINX_PCIE_DMA_RPEFR_ERR_VALID BIT(18)
+#define XILINX_PCIE_DMA_RPEFR_REQ_ID GENMASK(15, 0)
+#define XILINX_PCIE_DMA_RPEFR_ALL_MASK 0xffffffff
+
+/* Root Port Interrupt Register definitions */
+#define XILINX_PCIE_DMA_IDRN_SHIFT 16
+
+/* Root Port Status/control Register definitions */
+#define XILINX_PCIE_DMA_REG_RPSC_BEN BIT(0)
+
+/* Phy Status/Control Register definitions */
+#define XILINX_PCIE_DMA_REG_PSCR_LNKUP BIT(11)
+
+/* Number of MSI IRQs */
+#define XILINX_NUM_MSI_IRQS 64
+
+struct xilinx_msi {
+ struct irq_domain *msi_domain;
+ unsigned long *bitmap;
+ struct irq_domain *dev_domain;
+ struct mutex lock; /* protect bitmap variable */
+ int irq_msi0;
+ int irq_msi1;
+};
+
+/**
+ * struct pl_dma_pcie - PCIe port information
+ * @dev: Device pointer
+ * @reg_base: IO Mapped Register Base
+ * @irq: Interrupt number
+ * @cfg: Holds mappings of config space window
+ * @phys_reg_base: Physical address of reg base
+ * @intx_domain: Legacy IRQ domain pointer
+ * @pldma_domain: PL DMA IRQ domain pointer
+ * @resources: Bus Resources
+ * @msi: MSI information
+ * @irq_misc: Legacy and error interrupt number
+ * @intx_irq: legacy interrupt number
+ * @lock: lock protecting shared register access
+ */
+struct pl_dma_pcie {
+ struct device *dev;
+ void __iomem *reg_base;
+ int irq;
+ struct pci_config_window *cfg;
+ phys_addr_t phys_reg_base;
+ struct irq_domain *intx_domain;
+ struct irq_domain *pldma_domain;
+ struct list_head resources;
+ struct xilinx_msi msi;
+ int irq_misc;
+ int intx_irq;
+ raw_spinlock_t lock;
+};
+
+static inline u32 pcie_read(struct pl_dma_pcie *port, u32 reg)
+{
+ return readl(port->reg_base + reg);
+}
+
+static inline void pcie_write(struct pl_dma_pcie *port, u32 val, u32 reg)
+{
+ writel(val, port->reg_base + reg);
+}
+
+static inline bool xilinx_pl_dma_pcie_link_up(struct pl_dma_pcie *port)
+{
+ return (pcie_read(port, XILINX_PCIE_DMA_REG_PSCR) &
+ XILINX_PCIE_DMA_REG_PSCR_LNKUP) ? 1 : 0;
+}
+
+static void xilinx_pl_dma_pcie_clear_err_interrupts(struct pl_dma_pcie *port)
+{
+ unsigned long val = pcie_read(port, XILINX_PCIE_DMA_REG_RPEFR);
+
+ if (val & XILINX_PCIE_DMA_RPEFR_ERR_VALID) {
+ dev_dbg(port->dev, "Requester ID %lu\n",
+ val & XILINX_PCIE_DMA_RPEFR_REQ_ID);
+ pcie_write(port, XILINX_PCIE_DMA_RPEFR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_RPEFR);
+ }
+}
+
+static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
+{
+ struct pl_dma_pcie *port = bus->sysdata;
+
+ /* Check if link is up when trying to access downstream ports */
+ if (!pci_is_root_bus(bus)) {
+ /*
+ * If the link goes down after we check for link-up, we have a problem:
+ * if a PIO request is initiated while link-down, the whole controller
+ * hangs, and even after link comes up again, previous PIO requests
+ * won't work, and a reset of the whole PCIe controller is needed.
+ * Henceforth we need link-up check here to avoid sending PIO request
+ * when link is down.
+ */
+ if (!xilinx_pl_dma_pcie_link_up(port))
+ return false;
+ } else if (devfn > 0)
+ /* Only one device down on each root port */
+ return false;
+
+ return true;
+}
+
+static void __iomem *xilinx_pl_dma_pcie_map_bus(struct pci_bus *bus,
+ unsigned int devfn, int where)
+{
+ struct pl_dma_pcie *port = bus->sysdata;
+
+ if (!xilinx_pl_dma_pcie_valid_device(bus, devfn))
+ return NULL;
+
+ return port->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
+}
+
+/* PCIe operations */
+static const struct pci_ecam_ops xilinx_pl_dma_pcie_ops = {
+ .pci_ops = {
+ .map_bus = xilinx_pl_dma_pcie_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+ }
+};
+
+static void xilinx_pl_dma_pcie_enable_msi(struct pl_dma_pcie *port)
+{
+ phys_addr_t msi_addr = port->phys_reg_base;
+
+ pcie_write(port, upper_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE1);
+ pcie_write(port, lower_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE2);
+}
+
+static void xilinx_mask_intx_irq(struct irq_data *data)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
+ unsigned long flags;
+ u32 mask, val;
+
+ mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
+ pcie_write(port, (val & (~mask)), XILINX_PCIE_DMA_REG_IDRN_MASK);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void xilinx_unmask_intx_irq(struct irq_data *data)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
+ unsigned long flags;
+ u32 mask, val;
+
+ mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
+ pcie_write(port, (val | mask), XILINX_PCIE_DMA_REG_IDRN_MASK);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static struct irq_chip xilinx_leg_irq_chip = {
+ .name = "INTx",
+ .irq_mask = xilinx_mask_intx_irq,
+ .irq_unmask = xilinx_unmask_intx_irq,
+};
+
+static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+ irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &xilinx_leg_irq_chip, handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+ irq_set_status_flags(irq, IRQ_LEVEL);
+
+ return 0;
+}
+
+/* INTx IRQ Domain operations */
+static const struct irq_domain_ops intx_domain_ops = {
+ .map = xilinx_pl_dma_pcie_intx_map,
+};
+
+static void xilinx_pl_dma_pcie_handle_msi_irq(struct pl_dma_pcie *port,
+ u32 status_reg)
+{
+ struct xilinx_msi *msi;
+ unsigned long status;
+ u32 bit, virq;
+
+ msi = &port->msi;
+
+ while ((status = pcie_read(port, status_reg)) != 0) {
+ for_each_set_bit(bit, &status, 32) {
+ pcie_write(port, 1 << bit, status_reg);
+ if (status_reg == XILINX_PCIE_DMA_REG_MSI_HI)
+ bit = bit + 32;
+ virq = irq_find_mapping(msi->dev_domain, bit);
+ if (virq)
+ generic_handle_irq(virq);
+ }
+ }
+}
+
+static void xilinx_pl_dma_pcie_msi_handler_high(struct irq_desc *desc)
+{
+ struct pl_dma_pcie *port = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+ chained_irq_enter(chip, desc);
+ xilinx_pl_dma_pcie_handle_msi_irq(port, XILINX_PCIE_DMA_REG_MSI_HI);
+ chained_irq_exit(chip, desc);
+}
+
+static void xilinx_pl_dma_pcie_msi_handler_low(struct irq_desc *desc)
+{
+ struct pl_dma_pcie *port = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+ chained_irq_enter(chip, desc);
+ xilinx_pl_dma_pcie_handle_msi_irq(port, XILINX_PCIE_DMA_REG_MSI_LOW);
+ chained_irq_exit(chip, desc);
+}
+
+static void xilinx_pl_dma_pcie_event_flow(struct irq_desc *desc)
+{
+ struct pl_dma_pcie *port = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ unsigned long val;
+ int i;
+
+ chained_irq_enter(chip, desc);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDR);
+ val &= pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ for_each_set_bit(i, &val, 32)
+ generic_handle_domain_irq(port->pldma_domain, i);
+
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IDR);
+
+ chained_irq_exit(chip, desc);
+}
+
+#define _IC(x, s) \
+ [XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
+
+static const struct {
+ const char *sym;
+ const char *str;
+} intr_cause[32] = {
+ _IC(LINK_DOWN, "Link Down"),
+ _IC(HOT_RESET, "Hot reset"),
+ _IC(CFG_TIMEOUT, "ECAM access timeout"),
+ _IC(CORRECTABLE, "Correctable error message"),
+ _IC(NONFATAL, "Non fatal error message"),
+ _IC(FATAL, "Fatal error message"),
+ _IC(INTX, "INTX error message"),
+ _IC(MSI, "MSI message received"),
+ _IC(SLV_UNSUPP, "Slave unsupported request"),
+ _IC(SLV_UNEXP, "Slave unexpected completion"),
+ _IC(SLV_COMPL, "Slave completion timeout"),
+ _IC(SLV_ERRP, "Slave Error Poison"),
+ _IC(SLV_CMPABT, "Slave Completer Abort"),
+ _IC(SLV_ILLBUR, "Slave Illegal Burst"),
+ _IC(MST_DECERR, "Master decode error"),
+ _IC(MST_SLVERR, "Master slave error"),
+};
+
+static irqreturn_t xilinx_pl_dma_pcie_intr_handler(int irq, void *dev_id)
+{
+ struct pl_dma_pcie *port = (struct pl_dma_pcie *)dev_id;
+ struct device *dev = port->dev;
+ struct irq_data *d;
+
+ d = irq_domain_get_irq_data(port->pldma_domain, irq);
+ switch (d->hwirq) {
+ case XILINX_PCIE_INTR_CORRECTABLE:
+ case XILINX_PCIE_INTR_NONFATAL:
+ case XILINX_PCIE_INTR_FATAL:
+ xilinx_pl_dma_pcie_clear_err_interrupts(port);
+ fallthrough;
+
+ default:
+ if (intr_cause[d->hwirq].str)
+ dev_warn(dev, "%s\n", intr_cause[d->hwirq].str);
+ else
+ dev_warn(dev, "Unknown IRQ %ld\n", d->hwirq);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static struct irq_chip xilinx_msi_irq_chip = {
+ .name = "pl_dma_pciepcie:msi",
+ .irq_enable = pci_msi_unmask_irq,
+ .irq_disable = pci_msi_mask_irq,
+ .irq_mask = pci_msi_mask_irq,
+ .irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info xilinx_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_MULTI_PCI_MSI),
+ .chip = &xilinx_msi_irq_chip,
+};
+
+static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
+ phys_addr_t msi_addr = pcie->phys_reg_base;
+
+ msg->address_lo = lower_32_bits(msi_addr);
+ msg->address_hi = upper_32_bits(msi_addr);
+ msg->data = data->hwirq;
+}
+
+static int xilinx_msi_set_affinity(struct irq_data *irq_data,
+ const struct cpumask *mask, bool force)
+{
+ return -EINVAL;
+}
+
+static struct irq_chip xilinx_irq_chip = {
+ .name = "Xilinx MSI",
+ .irq_compose_msi_msg = xilinx_compose_msi_msg,
+ .irq_set_affinity = xilinx_msi_set_affinity,
+};
+
+static int xilinx_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *args)
+{
+ struct pl_dma_pcie *pcie = domain->host_data;
+ struct xilinx_msi *msi = &pcie->msi;
+ int bit, i;
+
+ mutex_lock(&msi->lock);
+ bit = bitmap_find_free_region(msi->bitmap, XILINX_NUM_MSI_IRQS,
+ get_count_order(nr_irqs));
+ if (bit < 0) {
+ mutex_unlock(&msi->lock);
+ return -ENOSPC;
+ }
+
+ for (i = 0; i < nr_irqs; i++) {
+ irq_domain_set_info(domain, virq + i, bit + i, &xilinx_irq_chip,
+ domain->host_data, handle_simple_irq,
+ NULL, NULL);
+ }
+ mutex_unlock(&msi->lock);
+ return 0;
+}
+
+static void xilinx_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs)
+{
+ struct irq_data *data = irq_domain_get_irq_data(domain, virq);
+ struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
+ struct xilinx_msi *msi = &pcie->msi;
+
+ mutex_lock(&msi->lock);
+ bitmap_release_region(msi->bitmap, data->hwirq,
+ get_count_order(nr_irqs));
+ mutex_unlock(&msi->lock);
+}
+
+static const struct irq_domain_ops dev_msi_domain_ops = {
+ .alloc = xilinx_irq_domain_alloc,
+ .free = xilinx_irq_domain_free,
+};
+
+static void xilinx_pl_dma_pcie_free_interrupts(struct pl_dma_pcie *port)
+{
+ irq_set_chained_handler_and_data(port->msi.irq_msi0, NULL, NULL);
+ irq_set_chained_handler_and_data(port->msi.irq_msi1, NULL, NULL);
+}
+
+static void xilinx_pl_dma_pcie_free_irq_domains(struct pl_dma_pcie *port)
+{
+ struct xilinx_msi *msi = &port->msi;
+
+ if (port->intx_domain) {
+ irq_domain_remove(port->intx_domain);
+ port->intx_domain = NULL;
+ }
+
+ if (msi->dev_domain) {
+ irq_domain_remove(msi->dev_domain);
+ msi->dev_domain = NULL;
+ }
+
+ if (msi->msi_domain) {
+ irq_domain_remove(msi->msi_domain);
+ msi->msi_domain = NULL;
+ }
+}
+
+static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct xilinx_msi *msi = &port->msi;
+ int size = BITS_TO_LONGS(XILINX_NUM_MSI_IRQS) * sizeof(long);
+ struct fwnode_handle *fwnode = of_node_to_fwnode(port->dev->of_node);
+
+ msi->dev_domain = irq_domain_add_linear(NULL, XILINX_NUM_MSI_IRQS,
+ &dev_msi_domain_ops, port);
+ if (!msi->dev_domain)
+ goto out;
+
+ msi->msi_domain = pci_msi_create_irq_domain(fwnode,
+ &xilinx_msi_domain_info,
+ msi->dev_domain);
+ if (!msi->msi_domain)
+ goto out;
+
+ mutex_init(&msi->lock);
+ msi->bitmap = kzalloc(size, GFP_KERNEL);
+ if (!msi->bitmap)
+ goto out;
+
+ raw_spin_lock_init(&port->lock);
+ xilinx_pl_dma_pcie_enable_msi(port);
+
+ return 0;
+
+out:
+ xilinx_pl_dma_pcie_free_irq_domains(port);
+ dev_err(dev, "Failed to allocate MSI IRQ domains\n");
+ return -ENOMEM;
+}
+
+static void xilinx_pl_dma_pcie_intx_flow(struct irq_desc *desc)
+{
+ struct pl_dma_pcie *port = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ unsigned long val;
+ int i;
+
+ chained_irq_enter(chip, desc);
+
+ val = FIELD_GET(XILINX_PCIE_DMA_IDRN_MASK,
+ pcie_read(port, XILINX_PCIE_DMA_REG_IDRN));
+
+ for_each_set_bit(i, &val, PCI_NUM_INTX)
+ generic_handle_domain_irq(port->intx_domain, i);
+
+ chained_irq_exit(chip, desc);
+}
+
+static void xilinx_pl_dma_pcie_mask_event_irq(struct irq_data *d)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ raw_spin_lock(&port->lock);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ val &= ~BIT(d->hwirq);
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
+ raw_spin_unlock(&port->lock);
+}
+
+static void xilinx_pl_dma_pcie_unmask_event_irq(struct irq_data *d)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ raw_spin_lock(&port->lock);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ val |= BIT(d->hwirq);
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
+ raw_spin_unlock(&port->lock);
+}
+
+static struct irq_chip xilinx_pl_dma_pcie_event_irq_chip = {
+ .name = "RC-Event",
+ .irq_mask = xilinx_pl_dma_pcie_mask_event_irq,
+ .irq_unmask = xilinx_pl_dma_pcie_unmask_event_irq,
+};
+
+static int xilinx_pl_dma_pcie_event_map(struct irq_domain *domain,
+ unsigned int irq, irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &xilinx_pl_dma_pcie_event_irq_chip,
+ handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+ irq_set_status_flags(irq, IRQ_LEVEL);
+
+ return 0;
+}
+
+static const struct irq_domain_ops event_domain_ops = {
+ .map = xilinx_pl_dma_pcie_event_map,
+};
+
+/**
+ * xilinx_pl_dma_pcie_init_irq_domain - Initialize IRQ domain
+ * @port: PCIe port information
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pl_dma_pcie_init_irq_domain(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct device_node *node = dev->of_node;
+ struct device_node *pcie_intc_node;
+ int ret;
+
+ /* Setup INTx */
+ pcie_intc_node = of_get_child_by_name(node, "interrupt-controller");
+ if (!pcie_intc_node) {
+ dev_err(dev, "No PCIe Intc node found\n");
+ return -EINVAL;
+ }
+
+ port->pldma_domain = irq_domain_add_linear(pcie_intc_node, 32,
+ &event_domain_ops, port);
+ if (!port->pldma_domain)
+ return -ENOMEM;
+
+ irq_domain_update_bus_token(port->pldma_domain, DOMAIN_BUS_NEXUS);
+
+ port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+ &intx_domain_ops, port);
+ if (!port->intx_domain) {
+		dev_err(dev, "Failed to get an INTx IRQ domain\n");
+		return -ENOMEM;
+ }
+
+ irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+
+ ret = xilinx_pl_dma_pcie_init_msi_irq_domain(port);
+ if (ret != 0) {
+ irq_domain_remove(port->intx_domain);
+ return -ENOMEM;
+ }
+
+ of_node_put(pcie_intc_node);
+ raw_spin_lock_init(&port->lock);
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_setup_irq(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ int i, irq;
+
+ port->irq = platform_get_irq(pdev, 0);
+ if (port->irq < 0)
+ return port->irq;
+
+ for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
+ int err;
+
+ if (!intr_cause[i].str)
+ continue;
+
+ irq = irq_create_mapping(port->pldma_domain, i);
+ if (!irq) {
+ dev_err(dev, "Failed to map interrupt\n");
+ return -ENXIO;
+ }
+
+ err = devm_request_irq(dev, irq, xilinx_pl_dma_pcie_intr_handler,
+ 0, intr_cause[i].sym, port);
+ if (err) {
+ dev_err(dev, "Failed to request IRQ %d\n", irq);
+ return err;
+ }
+ }
+
+ port->intx_irq = irq_create_mapping(port->pldma_domain,
+ XILINX_PCIE_INTR_INTX);
+ if (!port->intx_irq) {
+ dev_err(dev, "Failed to map INTx interrupt\n");
+ return -ENXIO;
+ }
+
+ /* Plug the INTx chained handler */
+ irq_set_chained_handler_and_data(port->intx_irq,
+ xilinx_pl_dma_pcie_intx_flow, port);
+
+ /* Plug the main event chained handler */
+ irq_set_chained_handler_and_data(port->irq,
+ xilinx_pl_dma_pcie_event_flow, port);
+
+ return 0;
+}
+
+static void xilinx_pl_dma_pcie_init_port(struct pl_dma_pcie *port)
+{
+ if (xilinx_pl_dma_pcie_link_up(port))
+ dev_info(port->dev, "PCIe Link is UP\n");
+ else
+ dev_info(port->dev, "PCIe Link is DOWN\n");
+
+ /* Disable all interrupts */
+ pcie_write(port, ~XILINX_PCIE_DMA_IDR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_IMR);
+
+ /* Clear pending interrupts */
+ pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_IDR) &
+ XILINX_PCIE_DMA_IMR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_IDR);
+
+ /* Needed for MSI DECODE MODE */
+ pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK, XILINX_PCIE_DMA_REG_MSI_LOW_MASK);
+ pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK, XILINX_PCIE_DMA_REG_MSI_HI_MASK);
+
+ /*set the Bridge enable bit */
+ pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_RPSC) |
+ XILINX_PCIE_DMA_REG_RPSC_BEN,
+ XILINX_PCIE_DMA_REG_RPSC);
+}
+
+static int xilinx_request_msi_irq(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+
+ port->msi.irq_msi0 = platform_get_irq_byname(pdev, "msi0");
+ if (port->msi.irq_msi0 <= 0) {
+ dev_err(dev, "Unable to find msi0 IRQ line\n");
+ return port->msi.irq_msi0;
+ }
+
+ irq_set_chained_handler_and_data(port->msi.irq_msi0,
+ xilinx_pl_dma_pcie_msi_handler_low,
+ port);
+
+ port->msi.irq_msi1 = platform_get_irq_byname(pdev, "msi1");
+ if (port->msi.irq_msi1 <= 0) {
+ irq_set_chained_handler_and_data(port->msi.irq_msi0,
+ NULL, NULL);
+ dev_err(dev, "Unable to find msi1 IRQ line\n");
+ return port->msi.irq_msi1;
+ }
+
+ irq_set_chained_handler_and_data(port->msi.irq_msi1,
+ xilinx_pl_dma_pcie_msi_handler_high,
+ port);
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_parse_dt(struct pl_dma_pcie *port,
+ struct resource *bus_range)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *res;
+ int err;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(dev, "missing \"reg\" property\n");
+ return -ENXIO;
+ }
+ port->phys_reg_base = res->start;
+
+ port->cfg = pci_ecam_create(dev, res, bus_range, &xilinx_pl_dma_pcie_ops);
+ if (IS_ERR(port->cfg))
+ return PTR_ERR(port->cfg);
+
+ port->reg_base = port->cfg->win;
+
+ err = xilinx_request_msi_irq(port);
+ if (err) {
+ pci_ecam_free(port->cfg);
+ return err;
+ }
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct pl_dma_pcie *port;
+ struct pci_host_bridge *bridge;
+ struct resource_entry *bus;
+ int err;
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
+ if (!bridge)
+ return -ENODEV;
+
+ port = pci_host_bridge_priv(bridge);
+
+ port->dev = dev;
+
+ bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (!bus)
+ return -ENODEV;
+
+ err = xilinx_pl_dma_pcie_parse_dt(port, bus->res);
+ if (err) {
+ dev_err(dev, "Parsing DT failed\n");
+ return err;
+ }
+
+ xilinx_pl_dma_pcie_init_port(port);
+
+ err = xilinx_pl_dma_pcie_init_irq_domain(port);
+ if (err)
+ goto err_irq_domain;
+
+	err = xilinx_pl_dma_pcie_setup_irq(port);
+	if (err)
+		goto err_host_bridge;
+
+ bridge->sysdata = port;
+ bridge->ops = (struct pci_ops *)&xilinx_pl_dma_pcie_ops.pci_ops;
+
+ err = pci_host_probe(bridge);
+ if (err < 0)
+ goto err_host_bridge;
+
+ return 0;
+
+err_host_bridge:
+ xilinx_pl_dma_pcie_free_irq_domains(port);
+
+err_irq_domain:
+ pci_ecam_free(port->cfg);
+ xilinx_pl_dma_pcie_free_interrupts(port);
+ return err;
+}
+
+static const struct of_device_id xilinx_pl_dma_pcie_of_match[] = {
+ {
+ .compatible = "xlnx,xdma-host-3.00",
+ },
+ {}
+};
+
+static struct platform_driver xilinx_pl_dma_pcie_driver = {
+ .driver = {
+ .name = "xilinx-xdma-pcie",
+ .of_match_table = xilinx_pl_dma_pcie_of_match,
+ .suppress_bind_attrs = true,
+ },
+ .probe = xilinx_pl_dma_pcie_probe,
+};
+
+builtin_platform_driver(xilinx_pl_dma_pcie_driver);