x86/hyperv: Fix IRQ effective CPU discovery for interrupt unmasking
As of today, the existing code uses the conjunction of the IRQ affinity mask and the
CPU online mask to find the CPU id to map an interrupt to.
It looks like the intention was to make sure that an IRQ won't be mapped to an
offline CPU.
Although it works correctly today, there are two problems with it:
1. The IRQ affinity mask already consists only of online CPUs, so ANDing it
with the online CPU mask is redundant.
2. cpumask_first_and() can return nr_cpu_ids if the IRQ affinity mask ever
ends up containing only offline CPUs, and in that case the current
implementation will likely crash the kernel in hv_map_interrupt() due to an
attempt to use an invalid CPU id when building the vp set.
Fix this by taking the first bit of the effective affinity mask as the CPU
to map the IRQ to.
Also add a paranoia WARN_ON_ONCE() for the case when the affinity mask
contains offline CPUs.
Signed-off-by: Stanislav Kinsburskii <stanislav.kinsburskii@gmail.com>
CC: "K. Y. Srinivasan" <kys@microsoft.com>
CC: Haiyang Zhang <haiyangz@microsoft.com>
CC: Wei Liu <wei.liu@kernel.org>
CC: Dexuan Cui <decui@microsoft.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: Borislav Petkov <bp@alien8.de>
CC: Dave Hansen <dave.hansen@linux.intel.com>
CC: x86@kernel.org
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Joerg Roedel <joro@8bytes.org>
CC: Will Deacon <will@kernel.org>
CC: Robin Murphy <robin.murphy@arm.com>
CC: linux-hyperv@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: iommu@lists.linux.dev
---
arch/x86/hyperv/irqdomain.c | 7 ++++---
drivers/iommu/hyperv-iommu.c | 7 ++++---
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -192,7 +192,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
struct pci_dev *dev;
struct hv_interrupt_entry out_entry, *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
- const cpumask_t *affinity;
int cpu;
u64 status;
@@ -204,8 +203,10 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}
- affinity = irq_data_get_effective_affinity_mask(data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));
if (data->chip_data) {
/*
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -197,15 +197,16 @@ hyperv_root_ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg)
u32 vector;
struct irq_cfg *cfg;
int ioapic_id;
- const struct cpumask *affinity;
int cpu;
struct hv_interrupt_entry entry;
struct hyperv_root_ir_data *data = irq_data->chip_data;
struct IO_APIC_route_entry e;
cfg = irqd_cfg(irq_data);
- affinity = irq_data_get_effective_affinity_mask(irq_data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(irq_data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));
vector = cfg->vector;
ioapic_id = data->ioapic_id;