Message ID | 4e8bda33aac4021b444e40389648deccf61c1f37.1697047261.git.robin.murphy@arm.com |
---|---|
State | New |
Headers |
From: Robin Murphy <robin.murphy@arm.com>
To: joro@8bytes.org, will@kernel.org
Cc: iommu@lists.linux.dev, jgg@nvidia.com, baolu.lu@linux.intel.com, linux-kernel@vger.kernel.org
Subject: [PATCH v5 3/7] iommu: Validate that devices match domains
Date: Wed, 11 Oct 2023 19:14:50 +0100
Message-Id: <4e8bda33aac4021b444e40389648deccf61c1f37.1697047261.git.robin.murphy@arm.com>
In-Reply-To: <cover.1697047261.git.robin.murphy@arm.com>
References: <cover.1697047261.git.robin.murphy@arm.com>
MIME-Version: 1.0
Series |
iommu: Retire bus ops
Commit Message
Robin Murphy
Oct. 11, 2023, 6:14 p.m. UTC
Before we can allow drivers to coexist, we need to make sure that one
driver's domain ops can't misinterpret another driver's dev_iommu_priv
data. To that end, add a token to the domain so we can remember how it
was allocated - for now this may as well be the device ops, since they
still correlate 1:1 with drivers. We can trust ourselves for internal
default domain attachment, so add checks to cover all the public attach
interfaces.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>

---

v4: Cover iommu_attach_device_pasid() as well, and improve robustness
against theoretical attempts to attach a noiommu group.
---
 drivers/iommu/iommu.c | 10 ++++++++++
 include/linux/iommu.h |  1 +
 2 files changed, 11 insertions(+)
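The ownership scheme the patch describes can be sketched in plain userspace C. All names here (`fake_iommu_ops`, `fake_domain`, `fake_attach`, and so on) are hypothetical stand-ins for the kernel's real structures, not the actual API - a minimal sketch of the idea, assuming one ops structure per driver:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's iommu structures. */
struct fake_iommu_ops { const char *driver_name; };

struct fake_domain {
	const struct fake_iommu_ops *owner; /* whose alloc we came from */
};

struct fake_device {
	const struct fake_iommu_ops *iommu_ops; /* NULL if no IOMMU */
};

/* Mirrors tagging the domain with the allocating driver's ops. */
static void fake_domain_init(struct fake_domain *dom,
			     const struct fake_iommu_ops *ops)
{
	dom->owner = ops;
}

/*
 * Mirrors the check added at the public attach boundary: reject the
 * attach when the device has no IOMMU at all, or when its driver is
 * not the one that allocated the domain, so one driver's domain ops
 * can never misinterpret another driver's per-device private data.
 */
static int fake_attach(const struct fake_device *dev,
		       const struct fake_domain *dom)
{
	if (!dev->iommu_ops || dev->iommu_ops != dom->owner)
		return -EINVAL;
	return 0;
}
```

A domain allocated against driver A's ops then attaches only to devices probed by driver A; devices of another driver, or with no IOMMU, are rejected with -EINVAL.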
Comments
On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
> Before we can allow drivers to coexist, we need to make sure that one
> driver's domain ops can't misinterpret another driver's dev_iommu_priv
> data. To that end, add a token to the domain so we can remember how it
> was allocated - for now this may as well be the device ops, since they
> still correlate 1:1 with drivers. We can trust ourselves for internal
> default domain attachment, so add checks to cover all the public attach
> interfaces.
>
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>
> ---
>
> v4: Cover iommu_attach_device_pasid() as well, and improve robustness
> against theoretical attempts to attach a noiommu group.
> ---

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
> @@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
>  static int __iommu_attach_group(struct iommu_domain *domain,
>  				struct iommu_group *group)
>  {
> +	struct device *dev;
> +
>  	if (group->domain && group->domain != group->default_domain &&
>  	    group->domain != group->blocking_domain)
>  		return -EBUSY;
>
> +	dev = iommu_group_first_dev(group);
> +	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
> +		return -EINVAL;

I was thinking about this later, how does this work for the global
static domains? domain->owner will not be set?

	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
		return ops->identity_domain;
	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
		return ops->blocked_domain;

Seems like it will break everything?

I suggest we just put a simple void * tag in the const domain->ops at
compile time to indicate the owning driver.

Jason
On 24/10/2023 7:52 pm, Jason Gunthorpe wrote:
> On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
>
>> @@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
>>  static int __iommu_attach_group(struct iommu_domain *domain,
>>  				struct iommu_group *group)
>>  {
>> +	struct device *dev;
>> +
>>  	if (group->domain && group->domain != group->default_domain &&
>>  	    group->domain != group->blocking_domain)
>>  		return -EBUSY;
>>
>> +	dev = iommu_group_first_dev(group);
>> +	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
>> +		return -EINVAL;
>
> I was thinking about this later, how does this work for the global
> static domains? domain->owner will not be set?
>
> 	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
> 		return ops->identity_domain;
> 	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
> 		return ops->blocked_domain;
>
> Seems like it will break everything?

I don't believe it makes any significant difference - as the commit
message points out, this validation is only applied at the public
interface boundaries of iommu_attach_group(), iommu_attach_device(),
and iommu_attach_device_pasid(), which are only expected to be
operating on explicitly-allocated unmanaged domains. For internal
default domain attachment, the domain is initially derived from the
device/group itself so we know it's appropriate by construction.

I guess this *would* now prevent some external caller reaching in and
trying to attach something to some other group's identity default
domain, but frankly it feels like making that fail would be no bad
thing anyway.

Thanks,
Robin.
On Wed, Oct 25, 2023 at 01:39:56PM +0100, Robin Murphy wrote:
> On 24/10/2023 7:52 pm, Jason Gunthorpe wrote:
> > On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
> >
> > > @@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
> > >  static int __iommu_attach_group(struct iommu_domain *domain,
> > >  				struct iommu_group *group)
> > >  {
> > > +	struct device *dev;
> > > +
> > >  	if (group->domain && group->domain != group->default_domain &&
> > >  	    group->domain != group->blocking_domain)
> > >  		return -EBUSY;
> > > +	dev = iommu_group_first_dev(group);
> > > +	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
> > > +		return -EINVAL;
> >
> > I was thinking about this later, how does this work for the global
> > static domains? domain->owner will not be set?
> >
> > 	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
> > 		return ops->identity_domain;
> > 	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
> > 		return ops->blocked_domain;
> >
> > Seems like it will break everything?
>
> I don't believe it makes any significant difference - as the commit
> message points out, this validation is only applied at the public
> interface boundaries of iommu_attach_group(), iommu_attach_device(),

Oh, making it only work for one domain type seems kind of hacky..

If that is the intention maybe the owner set should be moved into
iommu_domain_alloc() with a little comment noting that it is limited
to work in only a few cases?

I certainly didn't understand from the commit message that it was
only actually working for one domain type and that this also blocks
using other types with the public interface.

> and iommu_attach_device_pasid(), which are only expected to be
> operating on explicitly-allocated unmanaged domains.

We have nesting now in the iommufd branch, and SVA will come soon for
these APIs.

Regardless this will clash with the iommufd branch for this reason so
I guess it needs to wait till rc1.

Thanks,
Jason
On 25/10/2023 1:55 pm, Jason Gunthorpe wrote:
> On Wed, Oct 25, 2023 at 01:39:56PM +0100, Robin Murphy wrote:
>> On 24/10/2023 7:52 pm, Jason Gunthorpe wrote:
>>> On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
>>>
>>>> @@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
>>>>  static int __iommu_attach_group(struct iommu_domain *domain,
>>>>  				struct iommu_group *group)
>>>>  {
>>>> +	struct device *dev;
>>>> +
>>>>  	if (group->domain && group->domain != group->default_domain &&
>>>>  	    group->domain != group->blocking_domain)
>>>>  		return -EBUSY;
>>>> +	dev = iommu_group_first_dev(group);
>>>> +	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
>>>> +		return -EINVAL;
>>>
>>> I was thinking about this later, how does this work for the global
>>> static domains? domain->owner will not be set?
>>>
>>> 	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
>>> 		return ops->identity_domain;
>>> 	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
>>> 		return ops->blocked_domain;
>>>
>>> Seems like it will break everything?
>>
>> I don't believe it makes any significant difference - as the commit
>> message points out, this validation is only applied at the public
>> interface boundaries of iommu_attach_group(), iommu_attach_device(),
>
> Oh, making it only work for one domain type seems kind of hacky..
>
> If that is the intention maybe the owner set should be moved into
> iommu_domain_alloc() with a little comment noting that it is limited
> to work in only a few cases?
>
> I certainly didn't understand from the commit message that it was
> only actually working for one domain type and that this also blocks
> using other types with the public interface.

It's not about one particular domain type, it's about the scope of what
we consider valid usage. External API users should almost always be
attaching to their own domain which they have allocated, however we also
tolerate co-attaching additional groups to the same DMA domain in rare
cases where it's reasonable. The fact is that those users cannot
allocate blocking or identity domains, and I can't see that they would
ever have any legitimate business trying to do anything with them
anyway. So although yes, we technically lose some functionality once
this intersects with the static domain optimisation, it's only
questionable functionality which was never explicitly intended anyway.

I mean, what would be the valid purpose of trying to attach group A to
group B's identity domain, even if they *were* backed by the same
driver? At best it's pointless if group A also has its own identity
domain already, otherwise at worst it's a deliberate attempt to
circumvent a default domain policy imposed by the IOMMU core.

>> and iommu_attach_device_pasid(), which are only expected to be
>> operating on explicitly-allocated unmanaged domains.
>
> We have nesting now in the iommufd branch, and SVA will come soon for
> these APIs.
>
> Regardless this will clash with the iommufd branch for this reason so
> I guess it needs to wait till rc1.

Sigh, back on the shelf it goes then...

Thanks,
Robin.
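The interaction discussed here - a statically-defined identity or blocked domain never passing through the allocation path that sets the owner token, and therefore failing the public-interface check - can be illustrated with a small userspace sketch. All names (`fake_domain`, `fake_ops`, `fake_public_attach`) are hypothetical, not the kernel's real types:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures. */
struct fake_domain { const void *owner; };

struct fake_ops {
	struct fake_domain *identity_domain; /* static, pre-built domain */
};

/*
 * A driver's statically-defined identity domain: since it is never
 * handed through the dynamic allocation path, nothing ever sets its
 * owner field, which stays NULL (zero-initialised static storage).
 */
static struct fake_domain static_identity;

static struct fake_ops driver_ops = { .identity_domain = &static_identity };

/* The public-interface ownership check, applied to any domain. */
static int fake_public_attach(const struct fake_ops *dev_ops,
			      const struct fake_domain *dom)
{
	if (dom->owner != dev_ops)
		return -EINVAL; /* static domains fail: owner is NULL */
	return 0;
}
```

The static domain is rejected at the public boundary even for its own driver's devices, which is exactly the "questionable functionality" the thread concludes is acceptable to lose; a domain tagged on the allocation path would pass.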
On Wed, Oct 25, 2023 at 05:05:08PM +0100, Robin Murphy wrote:
> On 25/10/2023 1:55 pm, Jason Gunthorpe wrote:
> > On Wed, Oct 25, 2023 at 01:39:56PM +0100, Robin Murphy wrote:
> > > On 24/10/2023 7:52 pm, Jason Gunthorpe wrote:
> > > > On Wed, Oct 11, 2023 at 07:14:50PM +0100, Robin Murphy wrote:
> > > >
> > > > > @@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
> > > > >  static int __iommu_attach_group(struct iommu_domain *domain,
> > > > >  				struct iommu_group *group)
> > > > >  {
> > > > > +	struct device *dev;
> > > > > +
> > > > >  	if (group->domain && group->domain != group->default_domain &&
> > > > >  	    group->domain != group->blocking_domain)
> > > > >  		return -EBUSY;
> > > > > +	dev = iommu_group_first_dev(group);
> > > > > +	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
> > > > > +		return -EINVAL;
> > > >
> > > > I was thinking about this later, how does this work for the global
> > > > static domains? domain->owner will not be set?
> > > >
> > > > 	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
> > > > 		return ops->identity_domain;
> > > > 	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
> > > > 		return ops->blocked_domain;
> > > >
> > > > Seems like it will break everything?
> > >
> > > I don't believe it makes any significant difference - as the commit
> > > message points out, this validation is only applied at the public
> > > interface boundaries of iommu_attach_group(), iommu_attach_device(),
> >
> > Oh, making it only work for one domain type seems kind of hacky..
> >
> > If that is the intention maybe the owner set should be moved into
> > iommu_domain_alloc() with a little comment noting that it is limited
> > to work in only a few cases?
> >
> > I certainly didn't understand from the commit message that it was
> > only actually working for one domain type and that this also blocks
> > using other types with the public interface.
>
> It's not about one particular domain type, it's about the scope of what
> we consider valid usage. External API users should almost always be
> attaching to their own domain which they have allocated, however we also
> tolerate co-attaching additional groups to the same DMA domain in rare
> cases where it's reasonable. The fact is that those users cannot
> allocate blocking or identity domains, and I can't see that they would
> ever have any legitimate business trying to do anything with them
> anyway. So although yes, we technically lose some functionality once
> this intersects with the static domain optimisation, it's only
> questionable functionality which was never explicitly intended anyway.

I have no problem with that argument, I'm saying this is a subtle
emergent property. Let's document it, let's be more explicit.

The owner checks would do well to go along with specific domain type
checks as well to robustly enforce what you just explained.

Thanks,
Jason
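Jason's earlier alternative - a compile-time tag embedded in the const per-driver ops rather than a field set at allocation time - could look roughly like the following sketch. All names here are hypothetical illustrations, not a proposed kernel interface; the point is that statically-defined domains would carry the tag for free through their ops pointer, with no runtime setup:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins: an owner tag lives in the const domain ops. */
struct fake_domain_ops {
	const void *owner_tag; /* compile-time driver identity */
};

struct fake_domain {
	const struct fake_domain_ops *ops;
};

/* Each driver tags its ops once, at compile time. */
static const char driver_a_tag;
static const struct fake_domain_ops driver_a_domain_ops = {
	.owner_tag = &driver_a_tag,
};

/*
 * Even a statically-defined domain inherits the tag through its ops,
 * so no allocation-path bookkeeping is needed for it to be checkable.
 */
static const struct fake_domain driver_a_identity = {
	.ops = &driver_a_domain_ops,
};

/* Ownership check: compare the device's driver tag to the domain's. */
static int fake_owner_matches(const struct fake_domain *dom,
			      const void *dev_driver_tag)
{
	return dom->ops->owner_tag == dev_driver_tag;
}
```

The trade-off against the merged approach is one extra pointer per driver's domain ops instead of one per domain, in exchange for covering static domains uniformly.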
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7bb92e8b7a49..578292d3b152 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2114,6 +2114,7 @@ static struct iommu_domain *__iommu_domain_alloc(const struct iommu_ops *ops,
 		return NULL;
 
 	domain->type = type;
+	domain->owner = ops;
 	/*
 	 * If not already set, assume all sizes by default; the driver
 	 * may override this later
@@ -2279,10 +2280,16 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 static int __iommu_attach_group(struct iommu_domain *domain,
 				struct iommu_group *group)
 {
+	struct device *dev;
+
 	if (group->domain && group->domain != group->default_domain &&
 	    group->domain != group->blocking_domain)
 		return -EBUSY;
 
+	dev = iommu_group_first_dev(group);
+	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
+		return -EINVAL;
+
 	return __iommu_group_set_domain(group, domain);
 }
 
@@ -3480,6 +3487,9 @@ int iommu_attach_device_pasid(struct iommu_domain *domain,
 	if (!group)
 		return -ENODEV;
 
+	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
+		return -EINVAL;
+
 	mutex_lock(&group->mutex);
 	curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain, GFP_KERNEL);
 	if (curr) {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2d2802fb2c74..5c9560813d05 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -99,6 +99,7 @@ struct iommu_domain_geometry {
 struct iommu_domain {
 	unsigned type;
 	const struct iommu_domain_ops *ops;
+	const struct iommu_ops *owner; /* Whose domain_alloc we came from */
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	struct iommu_domain_geometry geometry;
 	struct iommu_dma_cookie *iova_cookie;