From patchwork Thu Sep 14 08:56:38 2023
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 139840
From: Lu Baolu <baolu.lu@linux.intel.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian,
	Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, iommu@lists.linux.dev, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v5 12/12] iommu: Improve iopf_queue_flush_dev()
Date: Thu, 14 Sep 2023 16:56:38 +0800
Message-Id: <20230914085638.17307-13-baolu.lu@linux.intel.com>
In-Reply-To: <20230914085638.17307-1-baolu.lu@linux.intel.com>
References: <20230914085638.17307-1-baolu.lu@linux.intel.com>

The iopf_queue_flush_dev() is called by the iommu driver before releasing
a PASID. It ensures that all pending faults for this PASID have been
handled or cancelled, and won't hit the address space that reuses this
PASID.
The driver must make sure that no new fault is added to the queue.

The SMMUv3 driver doesn't use it because it only implements the
Arm-specific stall fault model, where DMA transactions are held in the
SMMU while waiting for the OS to handle iopf's. Since a device driver
must complete all DMA transactions before detaching the domain, there are
no pending iopf's with the stall model. PRI support requires adding a
call to iopf_queue_flush_dev() after flushing the hardware page fault
queue.

The current implementation of iopf_queue_flush_dev() is a simplified
version. It is only suitable for the SVA case, in which the processing of
iopf is implemented in the inner loop of the iommu subsystem.

Improve this interface to make it also work for handling iopf out of the
iommu core. Remove a warning message in iommu_page_response() since the
iopf queue might get flushed before possible pending responses.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 include/linux/iommu.h      |  4 ++--
 drivers/iommu/intel/svm.c  |  2 +-
 drivers/iommu/io-pgfault.c | 46 +++++++++++++++++++++++++++++++++-----
 3 files changed, 44 insertions(+), 8 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 77ad33ffe3ac..465e23e945d0 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1275,7 +1275,7 @@ iommu_sva_domain_alloc(struct device *dev, struct mm_struct *mm)
 #ifdef CONFIG_IOMMU_IOPF
 int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev);
 int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev);
-int iopf_queue_flush_dev(struct device *dev);
+int iopf_queue_flush_dev(struct device *dev, ioasid_t pasid);
 struct iopf_queue *iopf_queue_alloc(const char *name);
 void iopf_queue_free(struct iopf_queue *queue);
 int iopf_queue_discard_partial(struct iopf_queue *queue);
@@ -1295,7 +1295,7 @@ iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
 	return -ENODEV;
 }
 
-static inline int iopf_queue_flush_dev(struct device *dev)
+static inline int
+iopf_queue_flush_dev(struct device *dev, ioasid_t pasid)
 {
 	return -ENODEV;
 }
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 780c5bd73ec2..4c3f4533e337 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -495,7 +495,7 @@ void intel_drain_pasid_prq(struct device *dev, u32 pasid)
 		goto prq_retry;
 	}
 
-	iopf_queue_flush_dev(dev);
+	iopf_queue_flush_dev(dev, pasid);
 
 	/*
 	 * Perform steps described in VT-d spec CH7.10 to drain page
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 3e6845bc5902..8d81688f715d 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -254,10 +254,9 @@ int iommu_page_response(struct device *dev,
 
 	/* Only send response if there is a fault report pending */
 	mutex_lock(&fault_param->lock);
-	if (list_empty(&fault_param->faults)) {
-		dev_warn_ratelimited(dev, "no pending PRQ, drop response\n");
+	if (list_empty(&fault_param->faults))
 		goto done_unlock;
-	}
+
 	/*
 	 * Check if we have a matching page request pending to respond,
 	 * otherwise return -EINVAL
@@ -300,6 +299,7 @@ EXPORT_SYMBOL_GPL(iommu_page_response);
 /**
  * iopf_queue_flush_dev - Ensure that all queued faults have been processed
  * @dev: the endpoint whose faults need to be flushed.
+ * @pasid: the PASID of the endpoint.
  *
  * The IOMMU driver calls this before releasing a PASID, to ensure that all
  * pending faults for this PASID have been handled, and won't hit the address
@@ -309,17 +309,53 @@ EXPORT_SYMBOL_GPL(iommu_page_response);
  *
  * Return: 0 on success and <0 on error.
  */
-int iopf_queue_flush_dev(struct device *dev)
+int iopf_queue_flush_dev(struct device *dev, ioasid_t pasid)
 {
 	struct iommu_fault_param *iopf_param = iopf_get_dev_fault_param(dev);
+	const struct iommu_ops *ops = dev_iommu_ops(dev);
+	struct iommu_page_response resp;
+	struct iopf_fault *iopf, *next;
+	int ret = 0;
 
 	if (!iopf_param)
 		return -ENODEV;
 
 	flush_workqueue(iopf_param->queue->wq);
+
+	mutex_lock(&iopf_param->lock);
+	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
+		if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) ||
+		    iopf->fault.prm.pasid != pasid)
+			break;
+
+		list_del(&iopf->list);
+		kfree(iopf);
+	}
+
+	list_for_each_entry_safe(iopf, next, &iopf_param->faults, list) {
+		if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) ||
+		    iopf->fault.prm.pasid != pasid)
+			continue;
+
+		memset(&resp, 0, sizeof(struct iommu_page_response));
+		resp.pasid = iopf->fault.prm.pasid;
+		resp.grpid = iopf->fault.prm.grpid;
+		resp.code = IOMMU_PAGE_RESP_INVALID;
+
+		if (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)
+			resp.flags = IOMMU_PAGE_RESP_PASID_VALID;
+
+		ret = ops->page_response(dev, iopf, &resp);
+		if (ret)
+			break;
+
+		list_del(&iopf->list);
+		kfree(iopf);
+	}
+	mutex_unlock(&iopf_param->lock);
 
 	iopf_put_dev_fault_param(iopf_param);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(iopf_queue_flush_dev);