[RFC,v2,13/21] irqchip: riscv-intc: Add ACPI support for AIA
Commit Message
The RINTC subtype structure in the MADT also has information about other
interrupt controllers (for example, the IMSIC MMIO region). So, save that
information and provide interfaces to retrieve it when required by the
corresponding drivers.
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
---
arch/riscv/include/asm/irq.h | 19 ++++++
drivers/irqchip/irq-riscv-intc.c | 102 ++++++++++++++++++++++++++++++-
2 files changed, 120 insertions(+), 1 deletion(-)
Comments
On Thu, Oct 26, 2023 at 01:53:36AM +0530, Sunil V L wrote:
> The RINTC subtype structure in the MADT also has information about other
> interrupt controllers (for example, the IMSIC MMIO region). So, save that
> information and provide interfaces to retrieve it when required by the
> corresponding drivers.
> @@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
> + * MSI controller (IMSIC) in RISC-V is optional. So, unless
> + * IMSIC is discovered, set system wide MSI support as
> + * unsupported. Once IMSIC is probed, MSI support will be set.
> + */
> + pci_no_msi();
It doesn't seem like we should have to tell the PCI core about
functionality we *don't* have.
I would think IMSIC would be detected before enumerating PCI devices
that might use it, and if we *haven't* found an IMSIC by the time we
get to pci_register_host_bridge(), would/should we set
PCI_BUS_FLAGS_NO_MSI there?
I see Thomas is cc'd; he'd have better insight.
Bjorn
Hi Bjorn,
On Thu, Oct 26, 2023 at 11:51:50AM -0500, Bjorn Helgaas wrote:
> On Thu, Oct 26, 2023 at 01:53:36AM +0530, Sunil V L wrote:
> > The RINTC subtype structure in the MADT also has information about other
> > interrupt controllers (for example, the IMSIC MMIO region). So, save that
> > information and provide interfaces to retrieve it when required by the
> > corresponding drivers.
>
> > @@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
>
> > + * MSI controller (IMSIC) in RISC-V is optional. So, unless
> > + * IMSIC is discovered, set system wide MSI support as
> > + * unsupported. Once IMSIC is probed, MSI support will be set.
> > + */
> > + pci_no_msi();
>
> It doesn't seem like we should have to tell the PCI core about
> functionality we *don't* have.
>
> I would think IMSIC would be detected before enumerating PCI devices
> that might use it, and if we *haven't* found an IMSIC by the time we
> get to pci_register_host_bridge(), would/should we set
> PCI_BUS_FLAGS_NO_MSI there?
>
The check in pci_register_host_bridge() looks like this:
if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev) &&
!pci_host_of_has_msi_map(parent))
bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
When there is no IMSIC, bridge->msi_domain is 0, and hence
PCI_BUS_FLAGS_NO_MSI will never be set. Do you recommend setting
PCI_BUS_FLAGS_NO_MSI when bridge->msi_domain is 0? Let me know if I am
missing something.
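Just to make the question concrete, this is roughly what I have in
mind (an untested, hypothetical sketch, not what the core does today):

	/* Hypothetical: treat a bridge without any MSI domain as
	 * no-MSI, per bus rather than system-wide */
	if (!bridge->msi_domain ||
	    (!dev_get_msi_domain(&bus->dev) &&
	     !pci_host_of_has_msi_map(parent)))
		bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;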
Thanks,
Sunil
On Fri, Oct 27, 2023 at 04:59:31PM +0530, Sunil V L wrote:
> Hi Bjorn,
>
> On Thu, Oct 26, 2023 at 11:51:50AM -0500, Bjorn Helgaas wrote:
> > On Thu, Oct 26, 2023 at 01:53:36AM +0530, Sunil V L wrote:
> > > The RINTC subtype structure in the MADT also has information about other
> > > interrupt controllers (for example, the IMSIC MMIO region). So, save that
> > > information and provide interfaces to retrieve it when required by the
> > > corresponding drivers.
> >
> > > @@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
> >
> > > + * MSI controller (IMSIC) in RISC-V is optional. So, unless
> > > + * IMSIC is discovered, set system wide MSI support as
> > > + * unsupported. Once IMSIC is probed, MSI support will be set.
> > > + */
> > > + pci_no_msi();
> >
> > It doesn't seem like we should have to tell the PCI core about
> > functionality we *don't* have.
> >
> > I would think IMSIC would be detected before enumerating PCI devices
> > that might use it, and if we *haven't* found an IMSIC by the time we
> > get to pci_register_host_bridge(), would/should we set
> > PCI_BUS_FLAGS_NO_MSI there?
> >
> The check in pci_register_host_bridge() looks like this:
>
> if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev) &&
> !pci_host_of_has_msi_map(parent))
> bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
>
> When there is no IMSIC, bridge->msi_domain is 0, and hence
> PCI_BUS_FLAGS_NO_MSI will never be set. Do you recommend setting
> PCI_BUS_FLAGS_NO_MSI when bridge->msi_domain is 0? Let me know if I am
> missing something.
>
What seems to work is setting bridge->msi_domain = true in
pci_create_root_bus(), similar to how pci_host_common_probe() sets it
for the OF framework. Would that be a better solution?
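For reference, the OF-side precedent does roughly this (quoting from
memory, see drivers/pci/controller/pci-host-common.c):

	/* Generic host bridge drivers declare up front that MSIs are
	 * expected to be handled by an MSI domain; the PCI core can
	 * then flag any bus that never receives one. */
	bridge->msi_domain = true;
	return pci_host_probe(bridge);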
Thanks,
Sunil
On Thu, Oct 26 2023 at 11:51, Bjorn Helgaas wrote:
> On Thu, Oct 26, 2023 at 01:53:36AM +0530, Sunil V L wrote:
>> The RINTC subtype structure in the MADT also has information about other
>> interrupt controllers (for example, the IMSIC MMIO region). So, save that
>> information and provide interfaces to retrieve it when required by the
>> corresponding drivers.
>
>> @@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
>
>> + * MSI controller (IMSIC) in RISC-V is optional. So, unless
>> + * IMSIC is discovered, set system wide MSI support as
>> + * unsupported. Once IMSIC is probed, MSI support will be set.
>> + */
>> + pci_no_msi();
>
> It doesn't seem like we should have to tell the PCI core about
> functionality we *don't* have.
>
> I would think IMSIC would be detected before enumerating PCI devices
> that might use it, and if we *haven't* found an IMSIC by the time we
> get to pci_register_host_bridge(), would/should we set
> PCI_BUS_FLAGS_NO_MSI there?
>
> I see Thomas is cc'd; he'd have better insight.
I was not really involved with this bus and MSI domain logic. Marc
should know. CC'ed.
Thanks,
tglx
On Fri, 27 Oct 2023 18:45:38 +0100,
Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Thu, Oct 26 2023 at 11:51, Bjorn Helgaas wrote:
> > On Thu, Oct 26, 2023 at 01:53:36AM +0530, Sunil V L wrote:
> >> The RINTC subtype structure in the MADT also has information about other
> >> interrupt controllers (for example, the IMSIC MMIO region). So, save that
> >> information and provide interfaces to retrieve it when required by the
> >> corresponding drivers.
> >
> >> @@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
> >
> >> + * MSI controller (IMSIC) in RISC-V is optional. So, unless
> >> + * IMSIC is discovered, set system wide MSI support as
> >> + * unsupported. Once IMSIC is probed, MSI support will be set.
> >> + */
> >> + pci_no_msi();
> >
> > It doesn't seem like we should have to tell the PCI core about
> > functionality we *don't* have.
> >
> > I would think IMSIC would be detected before enumerating PCI devices
> > that might use it, and if we *haven't* found an IMSIC by the time we
> > get to pci_register_host_bridge(), would/should we set
> > PCI_BUS_FLAGS_NO_MSI there?
> >
> > I see Thomas is cc'd; he'd have better insight.
>
> I was not really involved with this bus and MSI domain logic. Marc
> should know. CC'ed.
The canonical way of doing this is for the platform to express that
there is no linkage between the PCIe RC and the MSI controller. If
there is no MSI domain associated with the RC, then by extension the
endpoints don't get one either.
There are additional quirks linked to the msi_domain host bridge
property, allowing the host bridge driver to indicate that it isn't in
charge of MSIs, but that a third party may provide them (in which case
an MSI irq domain will be associated with the bridge).
In any case, slapping a pci_no_msi() call in an irqchip driver is
gross and most probably a sign that this is going in the wrong
direction, especially as this is platform-wide.
The only cases in which I'd expect this function to be called are:
- Platform or firmware explicitly disallowing MSIs
- pci=nomsi on the command line
none of which are the business of an irqchip driver.
HTH,
M.
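In code terms, the model described above is roughly the following (an
illustrative sketch of the per-bus flow, not the exact core code):

	/* MSI capability flows down from the root complex: if the host
	 * bridge has no MSI irq domain, the child bus never gets one,
	 * and the per-bus PCI_BUS_FLAGS_NO_MSI flag covers the rest.
	 * No system-wide pci_no_msi() switch is required. */
	struct irq_domain *d = dev_get_msi_domain(bus->bridge);

	if (d)
		dev_set_msi_domain(&bus->dev, d);
	else
		bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;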
@@ -12,8 +12,27 @@
#include <asm-generic/irq.h>
+#ifdef CONFIG_ACPI
+
+/*
+ * The ext_intc_id format is as follows:
+ * Bits [31:24] APLIC/PLIC ID
+ * Bits [15:0] APLIC IDC ID / PLIC S-Mode Context ID for this hart
+ */
+#define APLIC_PLIC_ID(x) ((x) >> 24)
+#define IDC_CONTEXT_ID(x) ((x) & 0x0000ffff)
+
+int __init acpi_get_intc_index_hartid(u32 index, unsigned long *hartid);
+int acpi_get_ext_intc_parent_hartid(u8 id, u32 idx, unsigned long *hartid);
+void acpi_get_plic_nr_contexts(u8 id, int *nr_contexts);
+int acpi_get_plic_context(u8 id, u32 idx, int *context_id);
+int __init acpi_get_imsic_mmio_info(u32 index, struct resource *res);
+
+#endif
+
void riscv_set_intc_hwnode_fn(struct fwnode_handle *(*fn)(void));
struct fwnode_handle *riscv_get_intc_hwnode(void);
+int acpi_imsic_probe(struct fwnode_handle *parent);
#endif /* _ASM_RISCV_IRQ_H */
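/*
 * Usage illustration, not part of the patch: decode ext_intc_id with
 * the macros above and walk PLIC contexts through the declared
 * interfaces. The ID values below are made up for the example.
 */
static void __init example_plic_walk(void)
{
	u32 ext_intc_id = 0x01000003;			/* example value only */
	u8 plic_id = APLIC_PLIC_ID(ext_intc_id);	/* bits [31:24] -> 1 */
	int i, nr_contexts, context_id;
	unsigned long hartid;

	acpi_get_plic_nr_contexts(plic_id, &nr_contexts);
	for (i = 0; i < nr_contexts; i++) {
		if (acpi_get_plic_context(plic_id, i, &context_id) ||
		    acpi_get_ext_intc_parent_hartid(plic_id, i, &hartid))
			continue;
		/* wire context_id up to the hart identified by hartid;
		 * IDC_CONTEXT_ID(ext_intc_id) would decode bits [15:0] */
	}
}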
@@ -18,6 +18,7 @@
#include <linux/of.h>
#include <linux/smp.h>
#include <asm/hwcap.h>
+#include "../pci/pci.h"
static struct irq_domain *intc_domain;
@@ -195,13 +196,100 @@ IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
#ifdef CONFIG_ACPI
+struct rintc_data {
+ u32 ext_intc_id;
+ unsigned long hart_id;
+ u64 imsic_addr;
+ u32 imsic_size;
+};
+
+static u32 nr_rintc;
+static struct rintc_data *rintc_acpi_data[NR_CPUS];
+
+int acpi_get_intc_index_hartid(u32 index, unsigned long *hartid)
+{
+ if (index >= nr_rintc)
+ return -EINVAL;
+
+ *hartid = rintc_acpi_data[index]->hart_id;
+ return 0;
+}
+
+int acpi_get_ext_intc_parent_hartid(u8 id, u32 idx, unsigned long *hartid)
+{
+ int i, j = 0;
+
+ for (i = 0; i < nr_rintc; i++) {
+ if (APLIC_PLIC_ID(rintc_acpi_data[i]->ext_intc_id) == id) {
+ if (idx == j) {
+ *hartid = rintc_acpi_data[i]->hart_id;
+ return 0;
+ }
+ j++;
+ }
+ }
+
+ return -EINVAL;
+}
+
+void acpi_get_plic_nr_contexts(u8 id, int *nr_contexts)
+{
+ int i, j = 0;
+
+ for (i = 0; i < nr_rintc; i++) {
+ if (APLIC_PLIC_ID(rintc_acpi_data[i]->ext_intc_id) == id)
+ j++;
+ }
+
+ *nr_contexts = j;
+}
+
+int acpi_get_plic_context(u8 id, u32 idx, int *context_id)
+{
+ int i, j = 0;
+
+ for (i = 0; i < nr_rintc; i++) {
+ if (APLIC_PLIC_ID(rintc_acpi_data[i]->ext_intc_id) == id) {
+ if (idx == j) {
+ *context_id = IDC_CONTEXT_ID(rintc_acpi_data[i]->ext_intc_id);
+ return 0;
+ }
+
+ j++;
+ }
+ }
+
+ return -EINVAL;
+}
+
+int acpi_get_imsic_mmio_info(u32 index, struct resource *res)
+{
+ if (index >= nr_rintc)
+ return -EINVAL;
+
+ res->start = rintc_acpi_data[index]->imsic_addr;
+ res->end = res->start + rintc_acpi_data[index]->imsic_size - 1;
+ res->flags = IORESOURCE_MEM;
+ return 0;
+}
+
static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
const unsigned long end)
{
struct fwnode_handle *fn;
struct acpi_madt_rintc *rintc;
+ int rc;
rintc = (struct acpi_madt_rintc *)header;
+ /* Guard against more RINTC entries than the fixed-size array holds */
+ if (nr_rintc >= NR_CPUS)
+ return -EINVAL;
+
+ rintc_acpi_data[nr_rintc] = kzalloc(sizeof(*rintc_acpi_data[0]), GFP_KERNEL);
+ if (!rintc_acpi_data[nr_rintc])
+ return -ENOMEM;
+
+ rintc_acpi_data[nr_rintc]->ext_intc_id = rintc->ext_intc_id;
+ rintc_acpi_data[nr_rintc]->hart_id = rintc->hart_id;
+ rintc_acpi_data[nr_rintc]->imsic_addr = rintc->imsic_addr;
+ rintc_acpi_data[nr_rintc]->imsic_size = rintc->imsic_size;
+ nr_rintc++;
/*
* The ACPI MADT will have one INTC for each CPU (or HART)
@@ -218,7 +306,19 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
return -ENOMEM;
}
- return riscv_intc_init_common(fn);
+ rc = riscv_intc_init_common(fn);
+ if (rc) {
+ irq_domain_free_fwnode(fn);
+ return rc;
+ }
+
+ /*
+ * MSI controller (IMSIC) in RISC-V is optional. So, unless
+ * IMSIC is discovered, set system wide MSI support as
+ * unsupported. Once IMSIC is probed, MSI support will be set.
+ */
+ pci_no_msi();
+ return 0;
}
IRQCHIP_ACPI_DECLARE(riscv_intc, ACPI_MADT_TYPE_RINTC, NULL,