From patchwork Mon Feb 12 01:22:27 2024
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 199581
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian,
	Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, Joel Granados,
	iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Lu Baolu, Jason Gunthorpe
Subject: [PATCH v13 16/16] iommu: Make iommu_report_device_fault() return void
Date: Mon, 12 Feb 2024 09:22:27 +0800
Message-Id: <20240212012227.119381-17-baolu.lu@linux.intel.com>
In-Reply-To: <20240212012227.119381-1-baolu.lu@linux.intel.com>
References: <20240212012227.119381-1-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.34.1

As iommu_report_device_fault() has been converted to auto-respond to a
page fault if it fails to enqueue it, there is no need to return a code
in any case. Make it return void.

Suggested-by: Jason Gunthorpe
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
---
 include/linux/iommu.h                       |  5 ++---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  4 ++--
 drivers/iommu/intel/svm.c                   | 19 ++++++----------
 drivers/iommu/io-pgfault.c                  | 25 +++++++--------------
 4 files changed, 19 insertions(+), 34 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 326aae1ab3a2..de839fd01bb8 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1569,7 +1569,7 @@ struct iopf_queue *iopf_queue_alloc(const char *name);
 void iopf_queue_free(struct iopf_queue *queue);
 int iopf_queue_discard_partial(struct iopf_queue *queue);
 void iopf_free_group(struct iopf_group *group);
-int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt);
+void iommu_report_device_fault(struct device *dev, struct iopf_fault *evt);
 void iopf_group_response(struct iopf_group *group,
			 enum iommu_page_response_code status);
 #else
@@ -1607,10 +1607,9 @@ static inline void iopf_free_group(struct iopf_group *group)
 {
 }
 
-static inline int
+static inline void
 iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
 {
-	return -ENODEV;
 }
 
 static inline void iopf_group_response(struct iopf_group *group,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 42eb59cb99f4..02580364acda 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1455,7 +1455,7 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
 /* IRQ and event handlers */
 static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
 {
-	int ret;
+	int ret = 0;
 	u32 perm = 0;
 	struct arm_smmu_master *master;
 	bool ssid_valid = evt[0] & EVTQ_0_SSV;
@@ -1511,7 +1511,7 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
 		goto out_unlock;
 	}
 
-	ret = iommu_report_device_fault(master->dev, &fault_evt);
+	iommu_report_device_fault(master->dev, &fault_evt);
 out_unlock:
 	mutex_unlock(&smmu->streams_mutex);
 	return ret;
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 2f8716636dbb..b644d57da841 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -561,14 +561,11 @@ static int prq_to_iommu_prot(struct page_req_dsc *req)
 	return prot;
 }
 
-static int intel_svm_prq_report(struct intel_iommu *iommu, struct device *dev,
-				struct page_req_dsc *desc)
+static void intel_svm_prq_report(struct intel_iommu *iommu, struct device *dev,
+				 struct page_req_dsc *desc)
 {
 	struct iopf_fault event = { };
 
-	if (!dev || !dev_is_pci(dev))
-		return -ENODEV;
-
 	/* Fill in event data for device specific processing */
 	event.fault.type = IOMMU_FAULT_PAGE_REQ;
 	event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
@@ -601,7 +598,7 @@ static int intel_svm_prq_report(struct intel_iommu *iommu, struct device *dev,
 		event.fault.prm.private_data[0] = ktime_to_ns(ktime_get());
 	}
 
-	return iommu_report_device_fault(dev, &event);
+	iommu_report_device_fault(dev, &event);
 }
 
 static void handle_bad_prq_event(struct intel_iommu *iommu,
@@ -704,12 +701,10 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		if (!pdev)
 			goto bad_req;
 
-		if (intel_svm_prq_report(iommu, &pdev->dev, req))
-			handle_bad_prq_event(iommu, req, QI_RESP_INVALID);
-		else
-			trace_prq_report(iommu, &pdev->dev, req->qw_0, req->qw_1,
-					 req->priv_data[0], req->priv_data[1],
-					 iommu->prq_seq_number++);
+		intel_svm_prq_report(iommu, &pdev->dev, req);
+		trace_prq_report(iommu, &pdev->dev, req->qw_0, req->qw_1,
+				 req->priv_data[0], req->priv_data[1],
+				 iommu->prq_seq_number++);
 		pci_dev_put(pdev);
 prq_advance:
 		head = (head + sizeof(*req)) & PRQ_RING_MASK;
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 6a325bff8164..06d78fcc79fd 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -176,26 +176,22 @@ static struct iopf_group *iopf_group_alloc(struct iommu_fault_param *iopf_param,
  * freed after the device has stopped generating page faults (or the iommu
  * hardware has been set to block the page faults) and the pending page faults
  * have been flushed.
- *
- * Return: 0 on success and <0 on error.
  */
-int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
+void iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
 {
 	struct iommu_fault *fault = &evt->fault;
 	struct iommu_fault_param *iopf_param;
 	struct iopf_group abort_group = {};
 	struct iopf_group *group;
-	int ret;
 
 	iopf_param = iopf_get_dev_fault_param(dev);
 	if (WARN_ON(!iopf_param))
-		return -ENODEV;
+		return;
 
 	if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) {
-		ret = report_partial_fault(iopf_param, fault);
+		report_partial_fault(iopf_param, fault);
 		iopf_put_dev_fault_param(iopf_param);
 		/* A request that is not the last does not need to be ack'd */
-		return ret;
 	}
 
 	/*
@@ -207,25 +203,21 @@ int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
 	 * leaving, otherwise partial faults will be stuck.
 	 */
 	group = iopf_group_alloc(iopf_param, evt, &abort_group);
-	if (group == &abort_group) {
-		ret = -ENOMEM;
+	if (group == &abort_group)
 		goto err_abort;
-	}
 
 	group->domain = get_domain_for_iopf(dev, fault);
-	if (!group->domain) {
-		ret = -EINVAL;
+	if (!group->domain)
 		goto err_abort;
-	}
 
 	/*
 	 * On success iopf_handler must call iopf_group_response() and
 	 * iopf_free_group()
 	 */
-	ret = group->domain->iopf_handler(group);
-	if (ret)
+	if (group->domain->iopf_handler(group))
 		goto err_abort;
-	return 0;
+
+	return;
 
 err_abort:
 	iopf_group_response(group, IOMMU_PAGE_RESP_FAILURE);
@@ -233,7 +225,6 @@ int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
 		__iopf_free_group(group);
 	else
 		iopf_free_group(group);
-	return ret;
 }
 EXPORT_SYMBOL_GPL(iommu_report_device_fault);
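
For illustration only, not part of the patch: a minimal sketch of what a fault-reporting
call site can look like once iommu_report_device_fault() returns void. The helper name
example_report_prq and its surrounding driver context are hypothetical; the only real
interface used is iommu_report_device_fault() as declared above.

	#include <linux/iommu.h>

	/*
	 * Hypothetical call site: with the void return, the caller simply
	 * reports the fault and moves on. A fault that cannot be enqueued
	 * is auto-responded inside iommu_report_device_fault(), so the
	 * caller no longer needs an error branch (e.g. the QI_RESP_INVALID
	 * path that the intel/svm.c hunk above removes).
	 */
	static void example_report_prq(struct device *dev, struct iopf_fault *evt)
	{
		iommu_report_device_fault(dev, evt);
	}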