Message ID | 20221121140048.659849460@linutronix.de |
---|---|
State | New |
Headers | From: Thomas Gleixner <tglx@linutronix.de> · To: LKML <linux-kernel@vger.kernel.org> · Cc: Marc Zyngier <maz@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, linux-pci@vger.kernel.org, linux-arm-kernel@lists.infradead.org, et al. · Subject: [patch V2 06/40] PCI/MSI: Provide static key for parent mask/unmask · Date: Mon, 21 Nov 2022 15:39:36 +0100 (CET) · Message-ID: <20221121140048.659849460@linutronix.de> |
Series | genirq, irqchip: Convert ARM MSI handling to per device MSI domains |
Commit Message
Thomas Gleixner
Nov. 21, 2022, 2:39 p.m. UTC
Most ARM(64) PCI/MSI domains mask and unmask in the parent domain after or
before the PCI mask/unmask operation takes place. As a result, more than a
dozen copies of the same wrapper implementation are scattered all over the place.
Don't make the same mistake with the new per device PCI/MSI domains and
provide a static key which lets the domain implementation enable this
sequence in the PCI/MSI code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>
---
drivers/pci/msi/irqdomain.c | 30 ++++++++++++++++++++++++++++++
include/linux/msi.h | 2 ++
2 files changed, 32 insertions(+)
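For context, the duplication the changelog refers to typically looks like the following wrapper pair. This sketch is modeled on the GICv3 ITS PCI/MSI glue (drivers/irqchip/irq-gic-v3-its-pci-msi.c); other ARM irqchip drivers carry near-identical copies under different names:

	/*
	 * The wrapper pattern this patch aims to eliminate: mask/unmask at
	 * the PCI level, then propagate the operation to the parent (GIC)
	 * domain. Modeled on the GICv3 ITS PCI/MSI glue; names vary per driver.
	 */
	static void its_mask_msi_irq(struct irq_data *d)
	{
		pci_msi_mask_irq(d);
		irq_chip_mask_parent(d);
	}

	static void its_unmask_msi_irq(struct irq_data *d)
	{
		pci_msi_unmask_irq(d);
		irq_chip_unmask_parent(d);
	}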
Comments
On Mon, 21 Nov 2022 14:39:36 +0000, Thomas Gleixner <tglx@linutronix.de> wrote:

[...]

> static void pci_mask_msi(struct irq_data *data)
> {
> 	struct msi_desc *desc = irq_data_get_msi_desc(data);
>
> 	pci_msi_mask(desc, BIT(data->irq - desc->irq));
> +	cond_mask_parent(data);

I find this a bit odd. If anything, I'd rather drop the masking at the PCI level and keep it local to the interrupt controller, because this is likely to be more universal than the equivalent PCI operation (think multi-MSI, for example, which cannot mask individual MSIs).

Another thing is that the static key is a global state. Nothing says that masking one way or the other is a universal thing, especially when you have multiple interrupt controllers dealing with MSIs in different ways. For example, GICv3 can use both the ITS and the GICv3-MBI frame at the same time for different PCI RCs. OK, they happen to deal with MSIs in the same way, but you hopefully get my point.

Thanks,

	M.
On Thu, Nov 24 2022 at 13:04, Marc Zyngier wrote:
> On Mon, 21 Nov 2022 14:39:36 +0000,
>> 	pci_msi_mask(desc, BIT(data->irq - desc->irq));
>> +	cond_mask_parent(data);
>
> I find this a bit odd. If anything, I'd rather drop the masking at the PCI level and keep it local to the interrupt controller, because this is likely to be more universal than the equivalent PCI operation (think multi-MSI, for example, which cannot mask individual MSIs).
>
> Another thing is that the static key is a global state. [...]

I'm fine with dropping that. I did this because basically all of the various ARM PCI/MSI domain implementations have a copy of the same functions. Some of them pointlessly have the wrong order because copy & pasta is so wonderful....

So the alternative solution is to provide _ONE_ set of correct callbacks and let the domain initialization code override the irq chip callbacks of the default PCI/MSI template.

Thanks,

	tglx
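A minimal sketch of that alternative, assuming the per-device domain code can patch the irq_chip embedded in the default template before the domain is created. struct msi_domain_template, pci_msi_mask_irq()/pci_msi_unmask_irq() and irq_chip_mask_parent()/irq_chip_unmask_parent() are real kernel APIs; pci_msi_init_template() and the callback names are hypothetical:

	/*
	 * Hypothetical sketch: one set of correct "PCI level plus parent"
	 * callbacks, patched into the default PCI/MSI template when the
	 * parent domain asks for it. No per-driver wrapper copies, no
	 * global static key.
	 */
	static void pci_msi_mask_also_parent(struct irq_data *data)
	{
		pci_msi_mask_irq(data);		/* PCI level first ...  */
		irq_chip_mask_parent(data);	/* ... then parent chip */
	}

	static void pci_msi_unmask_also_parent(struct irq_data *data)
	{
		irq_chip_unmask_parent(data);	/* parent chip first ... */
		pci_msi_unmask_irq(data);	/* ... then PCI level    */
	}

	static void pci_msi_init_template(struct msi_domain_template *tmpl,
					  bool mask_parent)
	{
		if (mask_parent) {
			tmpl->chip.irq_mask   = pci_msi_mask_also_parent;
			tmpl->chip.irq_unmask = pci_msi_unmask_also_parent;
		}
	}

Because the override happens once at domain creation, the mask order is fixed in a single place rather than re-implemented (sometimes wrongly) per driver.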
On Thu, 24 Nov 2022 13:17:00 +0000, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> I'm fine with dropping that. I did this because basically all of the various ARM PCI/MSI domain implementations have a copy of the same functions. Some of them pointlessly have the wrong order because copy & pasta is so wonderful....
>
> So the alternative solution is to provide _ONE_ set of correct callbacks and let the domain initialization code override the irq chip callbacks of the default PCI/MSI template.

If the various irqchips can tell the core code whether they want things to be masked at the PCI level or at the irqchip level, this would be a move in the right direction. For the GIC, I'd definitely want things masked locally.

What I'd like to get rid of is the double masking, as I agree it is on the "pretty dumb" side of things.

Thanks,

	M.
On Thu, Nov 24 2022 at 13:38, Marc Zyngier wrote:
> If the various irqchips can tell the core code whether they want things to be masked at the PCI level or at the irqchip level, this would be a move in the right direction. For the GIC, I'd definitely want things masked locally.
>
> What I'd like to get rid of is the double masking, as I agree it is on the "pretty dumb" side of things.

Not necessarily. It mitigates the problem of MSI interrupts which can't be masked because the implementers decided to spare the gates. MSI allows that, as masking is opt-in...

Let me think about it.

Thanks,

	tglx
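The opt-in Thomas refers to is the Per-Vector Masking Capable bit in the MSI Message Control register: plain MSI devices may legally omit the mask bits (MSI-X mandates them). A sketch of such a check using the standard PCI config accessors; the surrounding function is illustrative:

	#include <linux/pci.h>

	/*
	 * Sketch: per-vector masking is optional for plain MSI. If the
	 * PCI_MSI_FLAGS_MASKBIT capability bit is clear, masking at the
	 * PCI level has nothing to operate on and only a parent-chip
	 * mask can shut the vector up.
	 */
	static bool msi_can_mask_vectors(struct pci_dev *dev)
	{
		u16 flags;

		if (!dev->msi_cap)
			return false;

		pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &flags);
		return !!(flags & PCI_MSI_FLAGS_MASKBIT);
	}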
On Fri, Nov 25 2022 at 01:11, Thomas Gleixner wrote:
> Not necessarily. It mitigates the problem of MSI interrupts which can't be masked because the implementers decided to spare the gates. MSI allows that, as masking is opt-in...
>
> Let me think about it.

That really took a while to think about it :)

We have the following cases on the PCI/MSI side:

 1) The MSI[X] entry can be masked

 2) The MSI[X] entry cannot be masked, because hardware did not implement it, masking is globally disabled due to XEN, or masking does not exist for this horrible virtual MSI hackery

Now you said:

 "For the GIC, I'd definitely want things masked locally."

I decoded this as: you want to have these interrupts masked at the GIC level too, independent of #1 or #2 above. And then:

 "What I'd like to get rid of is the double masking."

But relying on the GIC alone is not really a good thing IMO. There is no point in letting some confused device send unwanted MSI messages around without a way to shut it up from the generic code via the regular mask/unmask callbacks.

On the other hand, for PCI/MSI[X] the mask/unmask operations are not in the hot path, as PCI/MSI[X] are strictly edge. Mask/unmask only happens on startup, shutdown and when an interrupt arrives after disable_irq() incremented the lazy disable counter.

For regular interrupt handling, mask/unmask is not involved.

So to avoid that global key we can let the parent domain set a new flag, e.g. MSI_FLAG_PCI_MSI_MASK_PARENT, in msi_parent_ops::supported_flags, and let the PCI/MSI core code query that information when the per device domain is created and select the appropriate template or fix up the callbacks after the domain is created.

Does that address your concerns?

Thanks,

	tglx
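A sketch of the flag-based variant Thomas outlines, reusing the parent-aware callbacks from the earlier sketch. msi_parent_ops::supported_flags is real; the flag's bit value and the fixup function here are illustrative assumptions, not the eventual mainline code:

	#define MSI_FLAG_PCI_MSI_MASK_PARENT	BIT(28)	/* illustrative bit value */

	/* Parent irqchip (e.g. the GIC ITS driver) advertises the behaviour: */
	static const struct msi_parent_ops its_msi_parent_ops_sketch = {
		.supported_flags = MSI_FLAG_PCI_MSI_MASK_PARENT,
		/* ... remaining fields as the driver already sets them ... */
	};

	/*
	 * PCI/MSI core, when the per device domain is created (sketch):
	 * query the parent and swap in the parent-aware callbacks, so no
	 * global static key is needed and the decision stays per domain.
	 */
	static void pci_msi_domain_fixup_chip(struct irq_domain *parent,
					      struct irq_chip *chip)
	{
		if (parent->msi_parent_ops->supported_flags &
		    MSI_FLAG_PCI_MSI_MASK_PARENT) {
			chip->irq_mask   = pci_msi_mask_also_parent;
			chip->irq_unmask = pci_msi_unmask_also_parent;
		}
	}

This addresses Marc's objection directly: the behaviour is selected per parent domain rather than flipped globally for every PCI/MSI domain in the system.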
On Mon, 22 May 2023 15:19:39 +0100, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> We have the following cases on the PCI/MSI side:
>
>  1) The MSI[X] entry can be masked
>
>  2) The MSI[X] entry cannot be masked, because hardware did not implement it, masking is globally disabled due to XEN, or masking does not exist for this horrible virtual MSI hackery

And as a bonus the case of non-PCI MSIs, which are definitely a thing, and I'd like them to fit in the same model (because life is too short to do anything else).

As for the Xen side, I hope to never have to care about it for the architecture I care about (I've long proclaimed Xen/arm64 dead and buried).

> I decoded this as: you want to have these interrupts masked at the GIC level too, independent of #1 or #2 above. And then:
>
>  "What I'd like to get rid of is the double masking."
>
> But relying on the GIC alone is not really a good thing IMO. There is no point in letting some confused device send unwanted MSI messages around without a way to shut it up from the generic code via the regular mask/unmask callbacks.

I have a slightly different view of the problem. The device masking is somehow orthogonal with the masking at the GIC level:

- can the interrupt be generated: this is a device property

- can the interrupt be signalled: this is an interrupt controller property

In a way, this is no different from your basic device, such as a timer: you need both the interrupt generation to be enabled at the timer level, and the interrupt signalling to be enabled (unmasked) at the irqchip level.

Today, we conflate the two, because we have either:

- devices that cannot selectively mask interrupts

- interrupt controllers that are limited in what they can mask

and this results in the terrible pattern that's all over the GIC-related stuff.

> So to avoid that global key we can let the parent domain set a new flag, e.g. MSI_FLAG_PCI_MSI_MASK_PARENT, in msi_parent_ops::supported_flags, and let the PCI/MSI core code query that information when the per device domain is created and select the appropriate template or fix up the callbacks after the domain is created.
>
> Does that address your concerns?

It does to a certain extent.

But what I'd really like is that in the most common case, where the interrupt controller is capable of masking MSIs, the PCI/MSI *enabling* becomes the responsibility of the PCI core code and not the IRQ code.

The IRQ code should ideally only be concerned with the masking of the interrupt at the irqchip level, and not beyond that. And that'd solve the Xen problem by merely ignoring it.

If we have HW out there that cannot mask MSIs at the interrupt controller level, then we'd have to fall back to device-side masking, which doesn't really work in general (MultiMSI being my favourite example). My gut feeling is that this is rare, but I'm pretty sure it exists.

Thanks,

	M.
On Tue, May 23 2023 at 11:25, Marc Zyngier wrote:
> But what I'd really like is that in the most common case, where the interrupt controller is capable of masking MSIs, the PCI/MSI *enabling* becomes the responsibility of the PCI core code and not the IRQ code.
>
> The IRQ code should ideally only be concerned with the masking of the interrupt at the irqchip level, and not beyond that. And that'd solve the Xen problem by merely ignoring it.
>
> If we have HW out there that cannot mask MSIs at the interrupt controller level, then we'd have to fall back to device-side masking, which doesn't really work in general (MultiMSI being my favourite example). My gut feeling is that this is rare, but I'm pretty sure it exists.

Sure. There are 3 parts involved:

   [Device]--->[PCI/MSI]---->[GIC]
                irqchip      irqchip

Controlling the interrupt machinery in the device happens at the device driver level and is conceptually independent of the interrupt management code. The device driver has no access to the PCI/MSI irqchip, and all it can do is enable/disable the source of the interrupt in the device.

For the interrupt management code, the job is to ensure that an interrupt can be prevented from disrupting OS operation, independent of device driver correctness.

As a matter of fact, we know that PCI/MSI masking ranges from not possible over flaky to properly working. So we can't reliably prevent a rogue device from spamming the PCIe bus with messages.

Which means that we should utilize the fact that the next interrupt chip in the hierarchy can mask reliably. I wish I could disable individual vectors at the local APIC level on x86...

Now the question is whether we want to make this conditional depending on what the PCI/MSI[X] hardware advertises, or just keep it simple and do it unconditionally.

Thanks,

	tglx
On Tue, 23 May 2023 14:05:56 +0100, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Which means that we should utilize the fact that the next interrupt chip in the hierarchy can mask reliably. I wish I could disable individual vectors at the local APIC level on x86...
>
> Now the question is whether we want to make this conditional depending on what the PCI/MSI[X] hardware advertises, or just keep it simple and do it unconditionally.

I think this should be unconditional if the root irqchip (the GIC in this instance) is capable of it.

So a suggestion where the root irqchip exposes its masking capability, which upon detection by the upper layer (whateverbusyouwant/MSI) makes it stop playing with its own device-level mask, has my full support (and now breathe normally).

Thanks,

	M.
--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -148,17 +148,45 @@ static void pci_device_domain_set_desc(m
 	arg->hwirq = desc->msi_index;
 }
 
+static DEFINE_STATIC_KEY_FALSE(pci_msi_mask_unmask_parent);
+
+/**
+ * pci_device_msi_mask_unmask_parent_enable - Enable propagation of mask/unmask
+ *					      to the parent interrupt chip
+ *
+ * For MSI parent interrupt domains which want to mask at the parent interrupt
+ * chip too.
+ */
+void pci_device_msi_mask_unmask_parent_enable(void)
+{
+	static_branch_enable(&pci_msi_mask_unmask_parent);
+}
+
+static __always_inline void cond_mask_parent(struct irq_data *data)
+{
+	if (static_branch_unlikely(&pci_msi_mask_unmask_parent))
+		irq_chip_mask_parent(data);
+}
+
+static __always_inline void cond_unmask_parent(struct irq_data *data)
+{
+	if (static_branch_unlikely(&pci_msi_mask_unmask_parent))
+		irq_chip_unmask_parent(data);
+}
+
 static void pci_mask_msi(struct irq_data *data)
 {
 	struct msi_desc *desc = irq_data_get_msi_desc(data);
 
 	pci_msi_mask(desc, BIT(data->irq - desc->irq));
+	cond_mask_parent(data);
 }
 
 static void pci_unmask_msi(struct irq_data *data)
 {
 	struct msi_desc *desc = irq_data_get_msi_desc(data);
 
+	cond_unmask_parent(data);
 	pci_msi_unmask(desc, BIT(data->irq - desc->irq));
 }
 
@@ -195,10 +223,12 @@ static struct msi_domain_template pci_ms
 static void pci_mask_msix(struct irq_data *data)
 {
 	pci_msix_mask(irq_data_get_msi_desc(data));
+	cond_mask_parent(data);
 }
 
 static void pci_unmask_msix(struct irq_data *data)
 {
+	cond_unmask_parent(data);
 	pci_msix_unmask(irq_data_get_msi_desc(data));
 }
 
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -653,12 +653,14 @@ struct irq_domain *pci_msi_create_irq_do
 						 struct irq_domain *parent);
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
+void pci_device_msi_mask_unmask_parent_enable(void);
 #else /* CONFIG_PCI_MSI */
 static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
 {
 	return NULL;
 }
 static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg) { }
+static inline void pci_device_msi_mask_unmask_parent_enable(void) { }
 #endif /* !CONFIG_PCI_MSI */
 
 #endif /* LINUX_MSI_H */
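As posted, a parent MSI domain opts in by calling the new function once during its own initialization. A minimal usage sketch; only the enable call is the API added by this patch, while the surrounding init function is illustrative:

	#include <linux/msi.h>

	/*
	 * Sketch: a parent MSI irqchip driver opting in to parent
	 * mask/unmask propagation with the API added by this patch.
	 */
	static int __init example_msi_parent_init(void)
	{
		/* ... set up the parent irq domain as before ... */

		/*
		 * Flip the static key so that pci_mask_msi()/pci_unmask_msi()
		 * (and the MSI-X variants) also call irq_chip_mask_parent()/
		 * irq_chip_unmask_parent(). Note: this is global state, which
		 * is exactly the objection raised in the thread above.
		 */
		pci_device_msi_mask_unmask_parent_enable();
		return 0;
	}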