From patchwork Wed Dec 20 01:23:19 2023
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 181401
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 01/14] iommu: Move iommu fault data to linux/iommu.h
Date: Wed, 20 Dec 2023 09:23:19 +0800
Message-Id: <20231220012332.168188-2-baolu.lu@linux.intel.com>
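The patch below moves these fault definitions out of uAPI and into linux/iommu.h, where kernel drivers can use them directly. As a rough sketch of how a kernel-internal consumer could fill the relocated structures, a hypothetical helper follows; the function name, its parameters, and the flag choices are illustrative assumptions, not part of this series.

#include <linux/iommu.h>

/*
 * Hypothetical sketch (not part of this patch): package a page request
 * into the struct iommu_fault that this patch relocates into
 * linux/iommu.h. Only fields defined by the moved structures are used;
 * a read-only access is assumed for simplicity.
 */
static void example_fill_page_request(struct iommu_fault *flt, u32 pasid,
				      u32 grpid, u64 addr, bool last)
{
	flt->type = IOMMU_FAULT_PAGE_REQ;
	flt->prm = (struct iommu_fault_page_request) {
		.flags = IOMMU_FAULT_PAGE_REQUEST_PASID_VALID |
			 (last ? IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE : 0),
		.pasid = pasid,
		.grpid = grpid,
		.perm  = IOMMU_FAULT_PERM_READ,
		.addr  = addr,
	};
}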
X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762421193559926 X-GMAIL-MSGID: 1785762421193559926 The iommu fault data is currently defined in uapi/linux/iommu.h, but is only used inside the iommu subsystem. Move it to linux/iommu.h, where it will be more accessible to kernel drivers. With this done, uapi/linux/iommu.h becomes empty and can be removed from the tree. Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Reviewed-by: Yi Liu Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 152 +++++++++++++++++++++++++++++++++- include/uapi/linux/iommu.h | 161 ------------------------------------- MAINTAINERS | 1 - 3 files changed, 151 insertions(+), 163 deletions(-) delete mode 100644 include/uapi/linux/iommu.h diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 3a556996fea7..80dfdc40b267 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -14,7 +14,6 @@ #include #include #include -#include #define IOMMU_READ (1 << 0) #define IOMMU_WRITE (1 << 1) @@ -44,6 +43,157 @@ struct iommu_sva; struct iommu_fault_event; struct iommu_dma_cookie; +#define IOMMU_FAULT_PERM_READ (1 << 0) /* read */ +#define IOMMU_FAULT_PERM_WRITE (1 << 1) /* write */ +#define IOMMU_FAULT_PERM_EXEC (1 << 2) /* exec */ +#define IOMMU_FAULT_PERM_PRIV (1 << 3) /* privileged */ + +/* Generic fault types, can be expanded IRQ remapping fault */ +enum iommu_fault_type { + IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */ + IOMMU_FAULT_PAGE_REQ, /* page request fault */ +}; + +enum iommu_fault_reason { + IOMMU_FAULT_REASON_UNKNOWN = 0, + + /* Could not access the PASID table (fetch caused external abort) */ + IOMMU_FAULT_REASON_PASID_FETCH, + + /* PASID entry is invalid or has configuration errors */ + IOMMU_FAULT_REASON_BAD_PASID_ENTRY, + + /* + * PASID is out of range (e.g. exceeds the maximum PASID + * supported by the IOMMU) or disabled. 
+ */ + IOMMU_FAULT_REASON_PASID_INVALID, + + /* + * An external abort occurred fetching (or updating) a translation + * table descriptor + */ + IOMMU_FAULT_REASON_WALK_EABT, + + /* + * Could not access the page table entry (Bad address), + * actual translation fault + */ + IOMMU_FAULT_REASON_PTE_FETCH, + + /* Protection flag check failed */ + IOMMU_FAULT_REASON_PERMISSION, + + /* access flag check failed */ + IOMMU_FAULT_REASON_ACCESS, + + /* Output address of a translation stage caused Address Size fault */ + IOMMU_FAULT_REASON_OOR_ADDRESS, +}; + +/** + * struct iommu_fault_unrecoverable - Unrecoverable fault data + * @reason: reason of the fault, from &enum iommu_fault_reason + * @flags: parameters of this fault (IOMMU_FAULT_UNRECOV_* values) + * @pasid: Process Address Space ID + * @perm: requested permission access using by the incoming transaction + * (IOMMU_FAULT_PERM_* values) + * @addr: offending page address + * @fetch_addr: address that caused a fetch abort, if any + */ +struct iommu_fault_unrecoverable { + __u32 reason; +#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0) +#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 1) +#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 2) + __u32 flags; + __u32 pasid; + __u32 perm; + __u64 addr; + __u64 fetch_addr; +}; + +/** + * struct iommu_fault_page_request - Page Request data + * @flags: encodes whether the corresponding fields are valid and whether this + * is the last page in group (IOMMU_FAULT_PAGE_REQUEST_* values). + * When IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID is set, the page response + * must have the same PASID value as the page request. When it is clear, + * the page response should not have a PASID. + * @pasid: Process Address Space ID + * @grpid: Page Request Group Index + * @perm: requested page permissions (IOMMU_FAULT_PERM_* values) + * @addr: page address + * @private_data: device-specific private information + */ +struct iommu_fault_page_request { +#define IOMMU_FAULT_PAGE_REQUEST_PASID_VALID (1 << 0) +#define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1) +#define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2) +#define IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID (1 << 3) + __u32 flags; + __u32 pasid; + __u32 grpid; + __u32 perm; + __u64 addr; + __u64 private_data[2]; +}; + +/** + * struct iommu_fault - Generic fault data + * @type: fault type from &enum iommu_fault_type + * @padding: reserved for future use (should be zero) + * @event: fault event, when @type is %IOMMU_FAULT_DMA_UNRECOV + * @prm: Page Request message, when @type is %IOMMU_FAULT_PAGE_REQ + * @padding2: sets the fault size to allow for future extensions + */ +struct iommu_fault { + __u32 type; + __u32 padding; + union { + struct iommu_fault_unrecoverable event; + struct iommu_fault_page_request prm; + __u8 padding2[56]; + }; +}; + +/** + * enum iommu_page_response_code - Return status of fault handlers + * @IOMMU_PAGE_RESP_SUCCESS: Fault has been handled and the page tables + * populated, retry the access. This is "Success" in PCI PRI. + * @IOMMU_PAGE_RESP_FAILURE: General error. Drop all subsequent faults from + * this device if possible. This is "Response Failure" in PCI PRI. + * @IOMMU_PAGE_RESP_INVALID: Could not handle this fault, don't retry the + * access. This is "Invalid Request" in PCI PRI. 
+ */ +enum iommu_page_response_code { + IOMMU_PAGE_RESP_SUCCESS = 0, + IOMMU_PAGE_RESP_INVALID, + IOMMU_PAGE_RESP_FAILURE, +}; + +/** + * struct iommu_page_response - Generic page response information + * @argsz: User filled size of this data + * @version: API version of this structure + * @flags: encodes whether the corresponding fields are valid + * (IOMMU_FAULT_PAGE_RESPONSE_* values) + * @pasid: Process Address Space ID + * @grpid: Page Request Group Index + * @code: response code from &enum iommu_page_response_code + */ +struct iommu_page_response { + __u32 argsz; +#define IOMMU_PAGE_RESP_VERSION_1 1 + __u32 version; +#define IOMMU_PAGE_RESP_PASID_VALID (1 << 0) + __u32 flags; + __u32 pasid; + __u32 grpid; + __u32 code; +}; + + /* iommu fault flags */ #define IOMMU_FAULT_READ 0x0 #define IOMMU_FAULT_WRITE 0x1 diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h deleted file mode 100644 index 65d8b0234f69..000000000000 --- a/include/uapi/linux/iommu.h +++ /dev/null @@ -1,161 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -/* - * IOMMU user API definitions - */ - -#ifndef _UAPI_IOMMU_H -#define _UAPI_IOMMU_H - -#include - -#define IOMMU_FAULT_PERM_READ (1 << 0) /* read */ -#define IOMMU_FAULT_PERM_WRITE (1 << 1) /* write */ -#define IOMMU_FAULT_PERM_EXEC (1 << 2) /* exec */ -#define IOMMU_FAULT_PERM_PRIV (1 << 3) /* privileged */ - -/* Generic fault types, can be expanded IRQ remapping fault */ -enum iommu_fault_type { - IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */ - IOMMU_FAULT_PAGE_REQ, /* page request fault */ -}; - -enum iommu_fault_reason { - IOMMU_FAULT_REASON_UNKNOWN = 0, - - /* Could not access the PASID table (fetch caused external abort) */ - IOMMU_FAULT_REASON_PASID_FETCH, - - /* PASID entry is invalid or has configuration errors */ - IOMMU_FAULT_REASON_BAD_PASID_ENTRY, - - /* - * PASID is out of range (e.g. exceeds the maximum PASID - * supported by the IOMMU) or disabled. - */ - IOMMU_FAULT_REASON_PASID_INVALID, - - /* - * An external abort occurred fetching (or updating) a translation - * table descriptor - */ - IOMMU_FAULT_REASON_WALK_EABT, - - /* - * Could not access the page table entry (Bad address), - * actual translation fault - */ - IOMMU_FAULT_REASON_PTE_FETCH, - - /* Protection flag check failed */ - IOMMU_FAULT_REASON_PERMISSION, - - /* access flag check failed */ - IOMMU_FAULT_REASON_ACCESS, - - /* Output address of a translation stage caused Address Size fault */ - IOMMU_FAULT_REASON_OOR_ADDRESS, -}; - -/** - * struct iommu_fault_unrecoverable - Unrecoverable fault data - * @reason: reason of the fault, from &enum iommu_fault_reason - * @flags: parameters of this fault (IOMMU_FAULT_UNRECOV_* values) - * @pasid: Process Address Space ID - * @perm: requested permission access using by the incoming transaction - * (IOMMU_FAULT_PERM_* values) - * @addr: offending page address - * @fetch_addr: address that caused a fetch abort, if any - */ -struct iommu_fault_unrecoverable { - __u32 reason; -#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0) -#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 1) -#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 2) - __u32 flags; - __u32 pasid; - __u32 perm; - __u64 addr; - __u64 fetch_addr; -}; - -/** - * struct iommu_fault_page_request - Page Request data - * @flags: encodes whether the corresponding fields are valid and whether this - * is the last page in group (IOMMU_FAULT_PAGE_REQUEST_* values). 
- * When IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID is set, the page response - * must have the same PASID value as the page request. When it is clear, - * the page response should not have a PASID. - * @pasid: Process Address Space ID - * @grpid: Page Request Group Index - * @perm: requested page permissions (IOMMU_FAULT_PERM_* values) - * @addr: page address - * @private_data: device-specific private information - */ -struct iommu_fault_page_request { -#define IOMMU_FAULT_PAGE_REQUEST_PASID_VALID (1 << 0) -#define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1) -#define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2) -#define IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID (1 << 3) - __u32 flags; - __u32 pasid; - __u32 grpid; - __u32 perm; - __u64 addr; - __u64 private_data[2]; -}; - -/** - * struct iommu_fault - Generic fault data - * @type: fault type from &enum iommu_fault_type - * @padding: reserved for future use (should be zero) - * @event: fault event, when @type is %IOMMU_FAULT_DMA_UNRECOV - * @prm: Page Request message, when @type is %IOMMU_FAULT_PAGE_REQ - * @padding2: sets the fault size to allow for future extensions - */ -struct iommu_fault { - __u32 type; - __u32 padding; - union { - struct iommu_fault_unrecoverable event; - struct iommu_fault_page_request prm; - __u8 padding2[56]; - }; -}; - -/** - * enum iommu_page_response_code - Return status of fault handlers - * @IOMMU_PAGE_RESP_SUCCESS: Fault has been handled and the page tables - * populated, retry the access. This is "Success" in PCI PRI. - * @IOMMU_PAGE_RESP_FAILURE: General error. Drop all subsequent faults from - * this device if possible. This is "Response Failure" in PCI PRI. - * @IOMMU_PAGE_RESP_INVALID: Could not handle this fault, don't retry the - * access. This is "Invalid Request" in PCI PRI. 
- */ -enum iommu_page_response_code { - IOMMU_PAGE_RESP_SUCCESS = 0, - IOMMU_PAGE_RESP_INVALID, - IOMMU_PAGE_RESP_FAILURE, -}; - -/** - * struct iommu_page_response - Generic page response information - * @argsz: User filled size of this data - * @version: API version of this structure - * @flags: encodes whether the corresponding fields are valid - * (IOMMU_FAULT_PAGE_RESPONSE_* values) - * @pasid: Process Address Space ID - * @grpid: Page Request Group Index - * @code: response code from &enum iommu_page_response_code - */ -struct iommu_page_response { - __u32 argsz; -#define IOMMU_PAGE_RESP_VERSION_1 1 - __u32 version; -#define IOMMU_PAGE_RESP_PASID_VALID (1 << 0) - __u32 flags; - __u32 pasid; - __u32 grpid; - __u32 code; -}; - -#endif /* _UAPI_IOMMU_H */ diff --git a/MAINTAINERS b/MAINTAINERS index 9104430e148e..4b84164c69bd 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11074,7 +11074,6 @@ F: drivers/iommu/ F: include/linux/iommu.h F: include/linux/iova.h F: include/linux/of_iommu.h -F: include/uapi/linux/iommu.h IOMMUFD M: Jason Gunthorpe From patchwork Wed Dec 20 01:23:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181407 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351564dyi; Tue, 19 Dec 2023 17:31:30 -0800 (PST) X-Google-Smtp-Source: AGHT+IEeRBIEST89W7SKi+5W7j5HtgFm2qyMHP0G9HDcB0KdE+32SpSsxFU2X43Or2T9O6qBoTir X-Received: by 2002:a92:d149:0:b0:35d:59a2:3336 with SMTP id t9-20020a92d149000000b0035d59a23336mr20083861ilg.58.1703035781443; Tue, 19 Dec 2023 17:29:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035781; cv=none; d=google.com; s=arc-20160816; b=Nz4NAddqyYHdHS5k45O0wTQtrvmsSyKOrkJtPNAYn5oSNGvkJ8z777Wnojb87165lI Ygs3ydAy1WNLVT6/b4HqwxC+Z3gR8oX3PPqETHWjgsqeEBiKhx5gc7dPzpIeXtRAaasJ aW7VZtO7KynG2L49NqJgDY39CqE2ZG3qgc07oU07/eXcnnWo9xGKhHdFdtNgPupLcon9 Icm/uNw4QaAAiSorGalJSLL2Gi1TAwJ9uilFrAfUbk7MhkH7K0d8rKv5ojvunvl41lEF 0pSsJ/ihTYiBYANWsVZwQvRHE+ollOxS7Pxo1gFt0Cm2uLwFxhyopvTS6U81vuX6/qqz V1Kg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=DRzNf2h2qIM9/niJ5pFyIO/r07OJqBum/ymEEJkM55A=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=Wiylhy/2n2JGEPJ7zmdtMmWU7znbZUFQ4VLJiBqEyuux3zADpzXFo7IHp/l4k8fKid bqgrF7Ubxc/1wxeSATvHYQZHuaqumyYIX/uDSaeRM42F5MFBWfb/vsNlpiDrJGIXSLdC QTJM+5cMNa5hRhH+N/9gCcM9A5WKT3Q0Vdk1rVGtzTj26ZkZALoUOdmzRtbf3ZkhHV6o RhZ30jjjFksGGFjaf3ace/UAtch1ieBhnys1NK7wfQvdFnhJirLpZUQQ0Ay5Fwmmj5yT GkfNMjNqOBMUd25jWkhMUkl8Iydyew6/B2ewncXU185F4PQfsM1HYWuBt8rUc520hiiQ JhEQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=dT7HpBZj; spf=pass (google.com: domain of linux-kernel+bounces-6236-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-6236-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 02/14] iommu/arm-smmu-v3: Remove unrecoverable faults reporting
Date: Wed, 20 Dec 2023 09:23:20 +0800
Message-Id:
<20231220012332.168188-3-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762447472852849 X-GMAIL-MSGID: 1785762447472852849 No device driver registers fault handler to handle the reported unrecoveraable faults. Remove it to avoid dead code. Signed-off-by: Lu Baolu Reviewed-by: Kevin Tian Reviewed-by: Jason Gunthorpe Tested-by: Longfang Liu --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 46 ++++++--------------- 1 file changed, 13 insertions(+), 33 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 0ffb1cf17e0b..4cf1054ed321 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1461,7 +1461,6 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) { int ret; - u32 reason; u32 perm = 0; struct arm_smmu_master *master; bool ssid_valid = evt[0] & EVTQ_0_SSV; @@ -1471,16 +1470,9 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) switch (FIELD_GET(EVTQ_0_ID, evt[0])) { case EVT_ID_TRANSLATION_FAULT: - reason = IOMMU_FAULT_REASON_PTE_FETCH; - break; case EVT_ID_ADDR_SIZE_FAULT: - reason = IOMMU_FAULT_REASON_OOR_ADDRESS; - break; case EVT_ID_ACCESS_FAULT: - reason = IOMMU_FAULT_REASON_ACCESS; - break; case EVT_ID_PERMISSION_FAULT: - reason = IOMMU_FAULT_REASON_PERMISSION; break; default: return -EOPNOTSUPP; @@ -1490,6 +1482,9 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) if (evt[1] & EVTQ_1_S2) return -EFAULT; + if (!(evt[1] & EVTQ_1_STALL)) + return -EOPNOTSUPP; + if (evt[1] & EVTQ_1_RnW) perm |= IOMMU_FAULT_PERM_READ; else @@ -1501,32 +1496,17 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) if (evt[1] & EVTQ_1_PnU) perm |= IOMMU_FAULT_PERM_PRIV; - if (evt[1] & EVTQ_1_STALL) { - flt->type = IOMMU_FAULT_PAGE_REQ; - flt->prm = (struct iommu_fault_page_request) { - .flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE, - .grpid = FIELD_GET(EVTQ_1_STAG, evt[1]), - .perm = perm, - .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), - }; + flt->type = IOMMU_FAULT_PAGE_REQ; + flt->prm = (struct iommu_fault_page_request) { + .flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE, + .grpid = FIELD_GET(EVTQ_1_STAG, evt[1]), + .perm = perm, + .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), + }; - if (ssid_valid) { - flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; - flt->prm.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]); - } - } else { - flt->type = IOMMU_FAULT_DMA_UNRECOV; - flt->event = (struct iommu_fault_unrecoverable) { - .reason = reason, - .flags = IOMMU_FAULT_UNRECOV_ADDR_VALID, - .perm = perm, - .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), - }; - - if (ssid_valid) { - flt->event.flags |= IOMMU_FAULT_UNRECOV_PASID_VALID; - flt->event.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]); - } + if (ssid_valid) { + flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + flt->prm.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]); } mutex_lock(&smmu->streams_mutex); From patchwork Wed Dec 20 01:23:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181402 Return-Path: Delivered-To: 
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 03/14] iommu: Remove unrecoverable fault data
Date: Wed, 20 Dec 2023 09:23:21 +0800
Message-Id:
<20231220012332.168188-4-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762467272661043 X-GMAIL-MSGID: 1785762467272661043 The unrecoverable fault data is not used anywhere. Remove it to avoid dead code. Suggested-by: Kevin Tian Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 72 ++----------------------------------------- 1 file changed, 2 insertions(+), 70 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 80dfdc40b267..2d0ab1c3dce5 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -50,67 +50,7 @@ struct iommu_dma_cookie; /* Generic fault types, can be expanded IRQ remapping fault */ enum iommu_fault_type { - IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */ - IOMMU_FAULT_PAGE_REQ, /* page request fault */ -}; - -enum iommu_fault_reason { - IOMMU_FAULT_REASON_UNKNOWN = 0, - - /* Could not access the PASID table (fetch caused external abort) */ - IOMMU_FAULT_REASON_PASID_FETCH, - - /* PASID entry is invalid or has configuration errors */ - IOMMU_FAULT_REASON_BAD_PASID_ENTRY, - - /* - * PASID is out of range (e.g. exceeds the maximum PASID - * supported by the IOMMU) or disabled. - */ - IOMMU_FAULT_REASON_PASID_INVALID, - - /* - * An external abort occurred fetching (or updating) a translation - * table descriptor - */ - IOMMU_FAULT_REASON_WALK_EABT, - - /* - * Could not access the page table entry (Bad address), - * actual translation fault - */ - IOMMU_FAULT_REASON_PTE_FETCH, - - /* Protection flag check failed */ - IOMMU_FAULT_REASON_PERMISSION, - - /* access flag check failed */ - IOMMU_FAULT_REASON_ACCESS, - - /* Output address of a translation stage caused Address Size fault */ - IOMMU_FAULT_REASON_OOR_ADDRESS, -}; - -/** - * struct iommu_fault_unrecoverable - Unrecoverable fault data - * @reason: reason of the fault, from &enum iommu_fault_reason - * @flags: parameters of this fault (IOMMU_FAULT_UNRECOV_* values) - * @pasid: Process Address Space ID - * @perm: requested permission access using by the incoming transaction - * (IOMMU_FAULT_PERM_* values) - * @addr: offending page address - * @fetch_addr: address that caused a fetch abort, if any - */ -struct iommu_fault_unrecoverable { - __u32 reason; -#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0) -#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 1) -#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 2) - __u32 flags; - __u32 pasid; - __u32 perm; - __u64 addr; - __u64 fetch_addr; + IOMMU_FAULT_PAGE_REQ = 1, /* page request fault */ }; /** @@ -142,19 +82,11 @@ struct iommu_fault_page_request { /** * struct iommu_fault - Generic fault data * @type: fault type from &enum iommu_fault_type - * @padding: reserved for future use (should be zero) - * @event: fault event, when @type is %IOMMU_FAULT_DMA_UNRECOV * @prm: Page Request message, when @type is %IOMMU_FAULT_PAGE_REQ - * @padding2: sets the fault size to allow for future extensions */ struct iommu_fault { __u32 type; - __u32 padding; - union { - struct iommu_fault_unrecoverable event; - struct iommu_fault_page_request prm; - __u8 padding2[56]; - }; + struct iommu_fault_page_request prm; }; /** From patchwork Wed Dec 20 01:23:22 
2023
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 181403
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 04/14] iommu: Cleanup iopf data structure definitions
Date: Wed, 20 Dec 2023 09:23:22 +0800
Message-Id: <20231220012332.168188-5-baolu.lu@linux.intel.com>
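The patch below drops the uAPI leftovers (argsz, version, and the __u32/__u64 types) from these structures. The following hypothetical sketch answers a reported page request against the cleaned-up kernel-only types; it is modeled loosely on the iopf_complete_group() hunk further down in the diff, and the function name is made up.

#include <linux/iommu.h>

/*
 * Hypothetical sketch (not part of this patch): build a response for a
 * previously reported page request using the kAPI-only
 * struct iommu_page_response (no argsz or version fields after this
 * cleanup) and hand it back through iommu_page_response().
 */
static int example_complete_page_request(struct device *dev,
					 const struct iommu_fault *fault,
					 enum iommu_page_response_code code)
{
	struct iommu_page_response resp = {
		.pasid = fault->prm.pasid,
		.grpid = fault->prm.grpid,
		.code = code,
	};

	if (fault->prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)
		resp.flags = IOMMU_PAGE_RESP_PASID_VALID;

	return iommu_page_response(dev, &resp);
}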
X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762482693962422 X-GMAIL-MSGID: 1785762482693962422 struct iommu_fault_page_request and struct iommu_page_response are not part of uAPI anymore. Convert them to data structures for kAPI. Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Reviewed-by: Yi Liu Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 27 +++++++++++---------------- drivers/iommu/io-pgfault.c | 1 - drivers/iommu/iommu.c | 4 ---- 3 files changed, 11 insertions(+), 21 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 2d0ab1c3dce5..18642e682f57 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -71,12 +71,12 @@ struct iommu_fault_page_request { #define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1) #define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2) #define IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID (1 << 3) - __u32 flags; - __u32 pasid; - __u32 grpid; - __u32 perm; - __u64 addr; - __u64 private_data[2]; + u32 flags; + u32 pasid; + u32 grpid; + u32 perm; + u64 addr; + u64 private_data[2]; }; /** @@ -85,7 +85,7 @@ struct iommu_fault_page_request { * @prm: Page Request message, when @type is %IOMMU_FAULT_PAGE_REQ */ struct iommu_fault { - __u32 type; + u32 type; struct iommu_fault_page_request prm; }; @@ -106,8 +106,6 @@ enum iommu_page_response_code { /** * struct iommu_page_response - Generic page response information - * @argsz: User filled size of this data - * @version: API version of this structure * @flags: encodes whether the corresponding fields are valid * (IOMMU_FAULT_PAGE_RESPONSE_* values) * @pasid: Process Address Space ID @@ -115,14 +113,11 @@ enum iommu_page_response_code { * @code: response code from &enum iommu_page_response_code */ struct iommu_page_response { - __u32 argsz; -#define IOMMU_PAGE_RESP_VERSION_1 1 - __u32 version; #define IOMMU_PAGE_RESP_PASID_VALID (1 << 0) - __u32 flags; - __u32 pasid; - __u32 grpid; - __u32 code; + u32 flags; + u32 pasid; + u32 grpid; + u32 code; }; diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index e5b8b9110c13..24b5545352ae 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -56,7 +56,6 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf, enum iommu_page_response_code status) { struct iommu_page_response resp = { - .version = IOMMU_PAGE_RESP_VERSION_1, .pasid = iopf->fault.prm.pasid, .grpid = iopf->fault.prm.grpid, .code = status, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 68e648b55767..b88dc3e0595c 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -1494,10 +1494,6 @@ int iommu_page_response(struct device *dev, if (!param || !param->fault_param) return -EINVAL; - if (msg->version != IOMMU_PAGE_RESP_VERSION_1 || - msg->flags & ~IOMMU_PAGE_RESP_PASID_VALID) - return -EINVAL; - /* Only send response if there is a fault report pending */ mutex_lock(¶m->fault_param->lock); if (list_empty(¶m->fault_param->faults)) { From patchwork Wed Dec 20 01:23:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181404 Return-Path: Delivered-To: ouuuleilei@gmail.com 
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 05/14] iommu: Merge iopf_device_param into iommu_fault_param
Date: Wed, 20 Dec 2023 09:23:23 +0800
Message-Id:
<20231220012332.168188-6-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762505662542373 X-GMAIL-MSGID: 1785762505662542373 The struct dev_iommu contains two pointers, fault_param and iopf_param. The fault_param pointer points to a data structure that is used to store pending faults that are awaiting responses. The iopf_param pointer points to a data structure that is used to store partial faults that are part of a Page Request Group. The fault_param and iopf_param pointers are essentially duplicate. This causes memory waste. Merge the iopf_device_param pointer into the iommu_fault_param pointer to consolidate the code and save memory. The consolidated pointer would be allocated on demand when the device driver enables the iopf on device, and would be freed after iopf is disabled. Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 18 ++++-- drivers/iommu/io-pgfault.c | 110 ++++++++++++++++++------------------- drivers/iommu/iommu.c | 34 ++---------- 3 files changed, 72 insertions(+), 90 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 18642e682f57..5063e1025c85 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -42,6 +42,7 @@ struct notifier_block; struct iommu_sva; struct iommu_fault_event; struct iommu_dma_cookie; +struct iopf_queue; #define IOMMU_FAULT_PERM_READ (1 << 0) /* read */ #define IOMMU_FAULT_PERM_WRITE (1 << 1) /* write */ @@ -595,21 +596,31 @@ struct iommu_fault_event { * struct iommu_fault_param - per-device IOMMU fault data * @handler: Callback function to handle IOMMU faults at device level * @data: handler private data - * @faults: holds the pending faults which needs response * @lock: protect pending faults list + * @dev: the device that owns this param + * @queue: IOPF queue + * @queue_list: index into queue->devices + * @partial: faults that are part of a Page Request Group for which the last + * request hasn't been submitted yet. 
+ * @faults: holds the pending faults which need response */ struct iommu_fault_param { iommu_dev_fault_handler_t handler; void *data; + struct mutex lock; + + struct device *dev; + struct iopf_queue *queue; + struct list_head queue_list; + + struct list_head partial; struct list_head faults; - struct mutex lock; }; /** * struct dev_iommu - Collection of per-device IOMMU data * * @fault_param: IOMMU detected device fault reporting data - * @iopf_param: I/O Page Fault queue and data * @fwspec: IOMMU fwspec data * @iommu_dev: IOMMU device this device is linked to * @priv: IOMMU Driver private data @@ -625,7 +636,6 @@ struct iommu_fault_param { struct dev_iommu { struct mutex lock; struct iommu_fault_param *fault_param; - struct iopf_device_param *iopf_param; struct iommu_fwspec *fwspec; struct iommu_device *iommu_dev; void *priv; diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 24b5545352ae..f948303b2a91 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -25,21 +25,6 @@ struct iopf_queue { struct mutex lock; }; -/** - * struct iopf_device_param - IO Page Fault data attached to a device - * @dev: the device that owns this param - * @queue: IOPF queue - * @queue_list: index into queue->devices - * @partial: faults that are part of a Page Request Group for which the last - * request hasn't been submitted yet. - */ -struct iopf_device_param { - struct device *dev; - struct iopf_queue *queue; - struct list_head queue_list; - struct list_head partial; -}; - struct iopf_fault { struct iommu_fault fault; struct list_head list; @@ -144,7 +129,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) int ret; struct iopf_group *group; struct iopf_fault *iopf, *next; - struct iopf_device_param *iopf_param; + struct iommu_fault_param *iopf_param; struct device *dev = cookie; struct dev_iommu *param = dev->iommu; @@ -159,7 +144,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) * As long as we're holding param->lock, the queue can't be unlinked * from the device and therefore cannot disappear. 
*/ - iopf_param = param->iopf_param; + iopf_param = param->fault_param; if (!iopf_param) return -ENODEV; @@ -229,14 +214,14 @@ EXPORT_SYMBOL_GPL(iommu_queue_iopf); int iopf_queue_flush_dev(struct device *dev) { int ret = 0; - struct iopf_device_param *iopf_param; + struct iommu_fault_param *iopf_param; struct dev_iommu *param = dev->iommu; if (!param) return -ENODEV; mutex_lock(¶m->lock); - iopf_param = param->iopf_param; + iopf_param = param->fault_param; if (iopf_param) flush_workqueue(iopf_param->queue->wq); else @@ -260,7 +245,7 @@ EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); int iopf_queue_discard_partial(struct iopf_queue *queue) { struct iopf_fault *iopf, *next; - struct iopf_device_param *iopf_param; + struct iommu_fault_param *iopf_param; if (!queue) return -EINVAL; @@ -287,34 +272,36 @@ EXPORT_SYMBOL_GPL(iopf_queue_discard_partial); */ int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) { - int ret = -EBUSY; - struct iopf_device_param *iopf_param; + int ret = 0; struct dev_iommu *param = dev->iommu; - - if (!param) - return -ENODEV; - - iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL); - if (!iopf_param) - return -ENOMEM; - - INIT_LIST_HEAD(&iopf_param->partial); - iopf_param->queue = queue; - iopf_param->dev = dev; + struct iommu_fault_param *fault_param; mutex_lock(&queue->lock); mutex_lock(¶m->lock); - if (!param->iopf_param) { - list_add(&iopf_param->queue_list, &queue->devices); - param->iopf_param = iopf_param; - ret = 0; + if (param->fault_param) { + ret = -EBUSY; + goto done_unlock; } + + fault_param = kzalloc(sizeof(*fault_param), GFP_KERNEL); + if (!fault_param) { + ret = -ENOMEM; + goto done_unlock; + } + + mutex_init(&fault_param->lock); + INIT_LIST_HEAD(&fault_param->faults); + INIT_LIST_HEAD(&fault_param->partial); + fault_param->dev = dev; + list_add(&fault_param->queue_list, &queue->devices); + fault_param->queue = queue; + + param->fault_param = fault_param; + +done_unlock: mutex_unlock(¶m->lock); mutex_unlock(&queue->lock); - if (ret) - kfree(iopf_param); - return ret; } EXPORT_SYMBOL_GPL(iopf_queue_add_device); @@ -330,34 +317,41 @@ EXPORT_SYMBOL_GPL(iopf_queue_add_device); */ int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) { - int ret = -EINVAL; + int ret = 0; struct iopf_fault *iopf, *next; - struct iopf_device_param *iopf_param; struct dev_iommu *param = dev->iommu; - - if (!param || !queue) - return -EINVAL; + struct iommu_fault_param *fault_param = param->fault_param; mutex_lock(&queue->lock); mutex_lock(¶m->lock); - iopf_param = param->iopf_param; - if (iopf_param && iopf_param->queue == queue) { - list_del(&iopf_param->queue_list); - param->iopf_param = NULL; - ret = 0; + if (!fault_param) { + ret = -ENODEV; + goto unlock; } - mutex_unlock(¶m->lock); - mutex_unlock(&queue->lock); - if (ret) - return ret; + + if (fault_param->queue != queue) { + ret = -EINVAL; + goto unlock; + } + + if (!list_empty(&fault_param->faults)) { + ret = -EBUSY; + goto unlock; + } + + list_del(&fault_param->queue_list); /* Just in case some faults are still stuck */ - list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) + list_for_each_entry_safe(iopf, next, &fault_param->partial, list) kfree(iopf); - kfree(iopf_param); + param->fault_param = NULL; + kfree(fault_param); +unlock: + mutex_unlock(¶m->lock); + mutex_unlock(&queue->lock); - return 0; + return ret; } EXPORT_SYMBOL_GPL(iopf_queue_remove_device); @@ -403,7 +397,7 @@ EXPORT_SYMBOL_GPL(iopf_queue_alloc); */ void iopf_queue_free(struct iopf_queue *queue) { - 
struct iopf_device_param *iopf_param, *next; + struct iommu_fault_param *iopf_param, *next; if (!queue) return; diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index b88dc3e0595c..e8f2bcea7f51 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -1355,27 +1355,18 @@ int iommu_register_device_fault_handler(struct device *dev, struct dev_iommu *param = dev->iommu; int ret = 0; - if (!param) + if (!param || !param->fault_param) return -EINVAL; mutex_lock(¶m->lock); /* Only allow one fault handler registered for each device */ - if (param->fault_param) { + if (param->fault_param->handler) { ret = -EBUSY; goto done_unlock; } - get_device(dev); - param->fault_param = kzalloc(sizeof(*param->fault_param), GFP_KERNEL); - if (!param->fault_param) { - put_device(dev); - ret = -ENOMEM; - goto done_unlock; - } param->fault_param->handler = handler; param->fault_param->data = data; - mutex_init(¶m->fault_param->lock); - INIT_LIST_HEAD(¶m->fault_param->faults); done_unlock: mutex_unlock(¶m->lock); @@ -1396,29 +1387,16 @@ EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler); int iommu_unregister_device_fault_handler(struct device *dev) { struct dev_iommu *param = dev->iommu; - int ret = 0; - if (!param) + if (!param || !param->fault_param) return -EINVAL; mutex_lock(¶m->lock); - - if (!param->fault_param) - goto unlock; - - /* we cannot unregister handler if there are pending faults */ - if (!list_empty(¶m->fault_param->faults)) { - ret = -EBUSY; - goto unlock; - } - - kfree(param->fault_param); - param->fault_param = NULL; - put_device(dev); -unlock: + param->fault_param->handler = NULL; + param->fault_param->data = NULL; mutex_unlock(¶m->lock); - return ret; + return 0; } EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler); From patchwork Wed Dec 20 01:23:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181405 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351331dyi; Tue, 19 Dec 2023 17:31:02 -0800 (PST) X-Google-Smtp-Source: AGHT+IFC1lkE7uXrhrjoPtJyC7IWC7lqII0xB6d9qDYcuHoA11pWVOYnepQrQxFJZ9Nl9XX7WKFX X-Received: by 2002:a0d:d910:0:b0:5e7:7536:348 with SMTP id b16-20020a0dd910000000b005e775360348mr2149038ywe.25.1703035862032; Tue, 19 Dec 2023 17:31:02 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035862; cv=none; d=google.com; s=arc-20160816; b=xS3jEqk3tr+gI8HWoODpo3iEYp4VXBM6oMWyllRfKbPHB3fHP70xD0vGu+U2fxOzbk P1qq+rSwLiBl9DyxA/zPD6boYM+C4ptovQIMZMA183wBtgapIyWQAZCjVjIz7NSIgPbW 1962EQQ83wKdaXTTJ60Ba+LAIG8ZML0K5yJOKeeXjWH5/3T0GgG5tTa3JXq8Yvzxt8m3 s1n3YVB8g0m9aA/N6reuE3V2pkBLgRvO8goSM/w2SPocF7x/zarCjiL5nMSTj1+u0rzl UW/ItaMpH9D0OJVBp8Acq48b9A3dYXCvhdn8MggWhiDHLpT9YVkpH6jGGKa+tgHgdRQv ZPGg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=hbg7UgT9dnHmErU7NTib3lTc2X9iNXFU/cAbM2+kqvs=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=rIig0L5Zq6YTEJdrORu3W4WxwP9iMOAaWZqyVtV4Cuwo99bQJOGh1IR5hIhSIYPZ/a dEqNZxajnR00176MnqhSqxHRmFlShuDn150kna6Oc2l4xBUvWNqqZAXURYSDePODcrXr XZ9gZRLmbJYn0Z2uqeU8d9nQIT32TfxmQYjOPTyTKhLTpIW8NZpHWnoq4Y77/wLplLBU /ym5hRiJmggdglpPG3GlRVZ5w8JpWxR0a5KBpaewHOfQ9DVreP3I+DxTaBAnNBVazsiU Vp60euDXDkDZff6EvzIWF9gVBtXcTOZfa5XEamMN1ftnq2G8cjh1hiPRI/SheS6LEYY/ guPA== 
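Before the next patch removes the handler registration interface, the driver-side enable sequence looks roughly like the minimal sketch below, with the per-device IOPF state now consolidated in dev->iommu->fault_param. example_enable_iopf() and the caller-owned queue are illustrative assumptions only; the declarations at this point in the series come from drivers/iommu/iommu-sva.h.

static int example_enable_iopf(struct iopf_queue *queue, struct device *dev)
{
	int ret;

	/* Allocates dev->iommu->fault_param and links the device to the queue */
	ret = iopf_queue_add_device(queue, dev);
	if (ret)
		return ret;

	/* Still required here; the registration interface goes away in the next patch */
	ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
	if (ret)
		iopf_queue_remove_device(queue, dev);

	return ret;
}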
-0800 From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 06/14] iommu: Remove iommu_[un]register_device_fault_handler() Date: Wed, 20 Dec 2023 09:23:24 +0800 Message-Id: <20231220012332.168188-7-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762531637832870 X-GMAIL-MSGID: 1785762531637832870 The individual iommu driver reports the iommu page faults by calling iommu_report_device_fault(), where a pre-registered device fault handler is called to route the fault to another fault handler installed on the corresponding iommu domain. The pre-registered device fault handler is static and won't be dynamic as the fault handler is eventually per iommu domain. Replace calling device fault handler with iommu_queue_iopf(). After this replacement, the registering and unregistering fault handler interfaces are not needed anywhere. Remove the interfaces and the related data structures to avoid dead code. Convert cookie parameter of iommu_queue_iopf() into a device pointer that is really passed. Signed-off-by: Lu Baolu Reviewed-by: Kevin Tian Reviewed-by: Jason Gunthorpe Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 23 ------ drivers/iommu/iommu-sva.h | 4 +- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 13 +--- drivers/iommu/intel/iommu.c | 24 ++---- drivers/iommu/io-pgfault.c | 6 +- drivers/iommu/iommu.c | 76 +------------------ 6 files changed, 13 insertions(+), 133 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 5063e1025c85..3f4ea7ea2fb8 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -128,7 +128,6 @@ struct iommu_page_response { typedef int (*iommu_fault_handler_t)(struct iommu_domain *, struct device *, unsigned long, int, void *); -typedef int (*iommu_dev_fault_handler_t)(struct iommu_fault *, void *); struct iommu_domain_geometry { dma_addr_t aperture_start; /* First address that can be mapped */ @@ -594,8 +593,6 @@ struct iommu_fault_event { /** * struct iommu_fault_param - per-device IOMMU fault data - * @handler: Callback function to handle IOMMU faults at device level - * @data: handler private data * @lock: protect pending faults list * @dev: the device that owns this param * @queue: IOPF queue @@ -605,8 +602,6 @@ struct iommu_fault_event { * @faults: holds the pending faults which need response */ struct iommu_fault_param { - iommu_dev_fault_handler_t handler; - void *data; struct mutex lock; struct device *dev; @@ -729,11 +724,6 @@ extern int iommu_group_for_each_dev(struct iommu_group *group, void *data, extern struct iommu_group *iommu_group_get(struct device *dev); extern struct iommu_group *iommu_group_ref_get(struct iommu_group *group); extern void iommu_group_put(struct iommu_group *group); -extern int iommu_register_device_fault_handler(struct device *dev, - iommu_dev_fault_handler_t handler, - void *data); - -extern int iommu_unregister_device_fault_handler(struct device *dev); extern int iommu_report_device_fault(struct device *dev, struct 
iommu_fault_event *evt); @@ -1145,19 +1135,6 @@ static inline void iommu_group_put(struct iommu_group *group) { } -static inline -int iommu_register_device_fault_handler(struct device *dev, - iommu_dev_fault_handler_t handler, - void *data) -{ - return -ENODEV; -} - -static inline int iommu_unregister_device_fault_handler(struct device *dev) -{ - return 0; -} - static inline int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) { diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h index 54946b5a7caf..de7819c796ce 100644 --- a/drivers/iommu/iommu-sva.h +++ b/drivers/iommu/iommu-sva.h @@ -13,7 +13,7 @@ struct iommu_fault; struct iopf_queue; #ifdef CONFIG_IOMMU_SVA -int iommu_queue_iopf(struct iommu_fault *fault, void *cookie); +int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev); int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev); int iopf_queue_remove_device(struct iopf_queue *queue, @@ -26,7 +26,7 @@ enum iommu_page_response_code iommu_sva_handle_iopf(struct iommu_fault *fault, void *data); #else /* CONFIG_IOMMU_SVA */ -static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) +static inline int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) { return -ENODEV; } diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 05722121f00e..ab2b0a5e4369 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -487,7 +487,6 @@ bool arm_smmu_master_sva_enabled(struct arm_smmu_master *master) static int arm_smmu_master_sva_enable_iopf(struct arm_smmu_master *master) { - int ret; struct device *dev = master->dev; /* @@ -500,16 +499,7 @@ static int arm_smmu_master_sva_enable_iopf(struct arm_smmu_master *master) if (!master->iopf_enabled) return -EINVAL; - ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev); - if (ret) - return ret; - - ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev); - if (ret) { - iopf_queue_remove_device(master->smmu->evtq.iopf, dev); - return ret; - } - return 0; + return iopf_queue_add_device(master->smmu->evtq.iopf, dev); } static void arm_smmu_master_sva_disable_iopf(struct arm_smmu_master *master) @@ -519,7 +509,6 @@ static void arm_smmu_master_sva_disable_iopf(struct arm_smmu_master *master) if (!master->iopf_enabled) return; - iommu_unregister_device_fault_handler(dev); iopf_queue_remove_device(master->smmu->evtq.iopf, dev); } diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 6fb5f6fceea1..df6ceefc09ee 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -4427,23 +4427,15 @@ static int intel_iommu_enable_iopf(struct device *dev) if (ret) return ret; - ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev); - if (ret) - goto iopf_remove_device; - ret = pci_enable_pri(pdev, PRQ_DEPTH); - if (ret) - goto iopf_unregister_handler; + if (ret) { + iopf_queue_remove_device(iommu->iopf_queue, dev); + return ret; + } + info->pri_enabled = 1; return 0; - -iopf_unregister_handler: - iommu_unregister_device_fault_handler(dev); -iopf_remove_device: - iopf_queue_remove_device(iommu->iopf_queue, dev); - - return ret; } static int intel_iommu_disable_iopf(struct device *dev) @@ -4466,11 +4458,9 @@ static int intel_iommu_disable_iopf(struct device *dev) info->pri_enabled = 0; /* - * With PRI disabled and outstanding PRQs drained, unregistering - * fault handler and 
removing device from iopf queue should never - * fail. + * With PRI disabled and outstanding PRQs drained, removing device + * from iopf queue should never fail. */ - WARN_ON(iommu_unregister_device_fault_handler(dev)); WARN_ON(iopf_queue_remove_device(iommu->iopf_queue, dev)); return 0; diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index f948303b2a91..4fda01de5589 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -87,7 +87,7 @@ static void iopf_handler(struct work_struct *work) /** * iommu_queue_iopf - IO Page Fault handler * @fault: fault event - * @cookie: struct device, passed to iommu_register_device_fault_handler. + * @dev: struct device. * * Add a fault to the device workqueue, to be handled by mm. * @@ -124,14 +124,12 @@ static void iopf_handler(struct work_struct *work) * * Return: 0 on success and <0 on error. */ -int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) +int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) { int ret; struct iopf_group *group; struct iopf_fault *iopf, *next; struct iommu_fault_param *iopf_param; - - struct device *dev = cookie; struct dev_iommu *param = dev->iommu; lockdep_assert_held(¶m->lock); diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index e8f2bcea7f51..bfa3c594542c 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -1330,76 +1330,6 @@ void iommu_group_put(struct iommu_group *group) } EXPORT_SYMBOL_GPL(iommu_group_put); -/** - * iommu_register_device_fault_handler() - Register a device fault handler - * @dev: the device - * @handler: the fault handler - * @data: private data passed as argument to the handler - * - * When an IOMMU fault event is received, this handler gets called with the - * fault event and data as argument. The handler should return 0 on success. If - * the fault is recoverable (IOMMU_FAULT_PAGE_REQ), the consumer should also - * complete the fault by calling iommu_page_response() with one of the following - * response code: - * - IOMMU_PAGE_RESP_SUCCESS: retry the translation - * - IOMMU_PAGE_RESP_INVALID: terminate the fault - * - IOMMU_PAGE_RESP_FAILURE: terminate the fault and stop reporting - * page faults if possible. - * - * Return 0 if the fault handler was installed successfully, or an error. - */ -int iommu_register_device_fault_handler(struct device *dev, - iommu_dev_fault_handler_t handler, - void *data) -{ - struct dev_iommu *param = dev->iommu; - int ret = 0; - - if (!param || !param->fault_param) - return -EINVAL; - - mutex_lock(¶m->lock); - /* Only allow one fault handler registered for each device */ - if (param->fault_param->handler) { - ret = -EBUSY; - goto done_unlock; - } - - param->fault_param->handler = handler; - param->fault_param->data = data; - -done_unlock: - mutex_unlock(¶m->lock); - - return ret; -} -EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler); - -/** - * iommu_unregister_device_fault_handler() - Unregister the device fault handler - * @dev: the device - * - * Remove the device fault handler installed with - * iommu_register_device_fault_handler(). - * - * Return 0 on success, or an error. 
- */ -int iommu_unregister_device_fault_handler(struct device *dev) -{ - struct dev_iommu *param = dev->iommu; - - if (!param || !param->fault_param) - return -EINVAL; - - mutex_lock(¶m->lock); - param->fault_param->handler = NULL; - param->fault_param->data = NULL; - mutex_unlock(¶m->lock); - - return 0; -} -EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler); - /** * iommu_report_device_fault() - Report fault event to device driver * @dev: the device @@ -1424,10 +1354,6 @@ int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) /* we only report device fault if there is a handler registered */ mutex_lock(¶m->lock); fparam = param->fault_param; - if (!fparam || !fparam->handler) { - ret = -EINVAL; - goto done_unlock; - } if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { @@ -1442,7 +1368,7 @@ int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) mutex_unlock(&fparam->lock); } - ret = fparam->handler(&evt->fault, fparam->data); + ret = iommu_queue_iopf(&evt->fault, dev); if (ret && evt_pending) { mutex_lock(&fparam->lock); list_del(&evt_pending->list); From patchwork Wed Dec 20 01:23:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181406 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351493dyi; Tue, 19 Dec 2023 17:31:21 -0800 (PST) X-Google-Smtp-Source: AGHT+IHKZq95Ex0M+lB5Z6cB/qO1xUFUyKbzbo5Q/FSFxgIeLkPyEU49xP6Ps7aqTlX8sXAMXp9o X-Received: by 2002:a17:906:c214:b0:a19:a19b:78bd with SMTP id d20-20020a170906c21400b00a19a19b78bdmr9201464ejz.128.1703035880990; Tue, 19 Dec 2023 17:31:20 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035880; cv=none; d=google.com; s=arc-20160816; b=fZ6ygXinhWYO+40ymyiQ+npy4TZnvj/ue7bK6TpEdm7s1tTov4H09vtbrGPEYAXr8Q MXK1gCc8MQWyUEAbv6O+/yr+J7qXfBvR088QQdfHO3bSPpjhhQE2wrlQ3adLEb2FtkqH 1uDvFEhAwwA/DsFKdFf9C0Ql4mvFy9JJZ5zQiDzWY/FS9f1NLCIzadXgyWc5HaNHbf6/ PvVzZ9ZNVOWQp527O7b8jxzml3pyuhmStFOFSNMiq8LQ9+o1e6JKUTOPHmM6Tc0ZT1lE xjMl2VWLSDKyxIvWeJxF7wmxJWLfbgwO5AgmVYdHYZgrBkON3Wa8yZU3cIfEABK22b/j 4UYg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=E3Sm/D+BKrPb9VgTHrABqJ3Sh24UxVXgLjh/XLmKvc8=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=lTtn+LRl2NGEcGPUtWDM9/jyNhzoRifYumFsu6zkex4/+qtfFgLd5UYoi61DD1HRQu qryFADvfs146MimeIauOr8SBla9+KOQmOMRNscMoL75iQ6wzQv18ZhLsF2XkQuAbQ2Tp 5CiBxcVg33HmqvSNmKXqKMu/18gbWQN4RGfLMHrd6mz8yQgutWt/Yf9yv9i+WFkxemzX EUguKQfGwH2ydyEvAYw78kJNMYmNiiKvEkXSgcqn1yixbhXu8KVeQur3A0jNzQY5YsB0 y3VXchQ0/6y3WAo4bfsEzcOmChsOIEfyNz18snfun8lMyA0Qb3Spn2Z2pt33qbvrOaen pdag== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=VQKY96cR; spf=pass (google.com: domain of linux-kernel+bounces-6241-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-6241-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 07/14] iommu: Merge iommu_fault_event and iopf_fault Date: Wed, 20 Dec 2023 09:23:25 +0800 Message-Id: <20231220012332.168188-8-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762552244603767 X-GMAIL-MSGID: 1785762552244603767 The iommu_fault_event and iopf_fault data structures store the same information about an iopf fault. They are also used in the same way. Merge these two data structures into a single one to make the code more concise and easier to maintain. Signed-off-by: Lu Baolu Reviewed-by: Kevin Tian Reviewed-by: Jason Gunthorpe Reviewed-by: Yi Liu Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 27 ++++++--------------- drivers/iommu/intel/iommu.h | 2 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 4 +-- drivers/iommu/intel/svm.c | 5 ++-- drivers/iommu/io-pgfault.c | 5 ---- drivers/iommu/iommu.c | 8 +++--- 6 files changed, 17 insertions(+), 34 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 3f4ea7ea2fb8..f97a5ab52af6 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -40,7 +40,6 @@ struct iommu_domain_ops; struct iommu_dirty_ops; struct notifier_block; struct iommu_sva; -struct iommu_fault_event; struct iommu_dma_cookie; struct iopf_queue; @@ -121,6 +120,11 @@ struct iommu_page_response { u32 code; }; +struct iopf_fault { + struct iommu_fault fault; + /* node for pending lists */ + struct list_head list; +}; /* iommu fault flags */ #define IOMMU_FAULT_READ 0x0 @@ -485,7 +489,7 @@ struct iommu_ops { int (*dev_disable_feat)(struct device *dev, enum iommu_dev_features f); int (*page_response)(struct device *dev, - struct iommu_fault_event *evt, + struct iopf_fault *evt, struct iommu_page_response *msg); int (*def_domain_type)(struct device *dev); @@ -577,20 +581,6 @@ struct iommu_device { u32 max_pasids; }; -/** - * struct iommu_fault_event - Generic fault event - * - * Can represent recoverable faults such as a page requests or - * unrecoverable faults such as DMA or IRQ remapping faults. 
- * - * @fault: fault descriptor - * @list: pending fault event list, used for tracking responses - */ -struct iommu_fault_event { - struct iommu_fault fault; - struct list_head list; -}; - /** * struct iommu_fault_param - per-device IOMMU fault data * @lock: protect pending faults list @@ -725,8 +715,7 @@ extern struct iommu_group *iommu_group_get(struct device *dev); extern struct iommu_group *iommu_group_ref_get(struct iommu_group *group); extern void iommu_group_put(struct iommu_group *group); -extern int iommu_report_device_fault(struct device *dev, - struct iommu_fault_event *evt); +extern int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt); extern int iommu_page_response(struct device *dev, struct iommu_page_response *msg); @@ -1136,7 +1125,7 @@ static inline void iommu_group_put(struct iommu_group *group) } static inline -int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) +int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { return -ENODEV; } diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h index d02f916d8e59..696d95293a69 100644 --- a/drivers/iommu/intel/iommu.h +++ b/drivers/iommu/intel/iommu.h @@ -1079,7 +1079,7 @@ struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent, void intel_svm_check(struct intel_iommu *iommu); int intel_svm_enable_prq(struct intel_iommu *iommu); int intel_svm_finish_prq(struct intel_iommu *iommu); -int intel_svm_page_response(struct device *dev, struct iommu_fault_event *evt, +int intel_svm_page_response(struct device *dev, struct iopf_fault *evt, struct iommu_page_response *msg); struct iommu_domain *intel_svm_domain_alloc(void); void intel_svm_remove_dev_pasid(struct device *dev, ioasid_t pasid); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 4cf1054ed321..ab4f04c7f932 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -922,7 +922,7 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu, } static int arm_smmu_page_response(struct device *dev, - struct iommu_fault_event *unused, + struct iopf_fault *unused, struct iommu_page_response *resp) { struct arm_smmu_cmdq_ent cmd = {0}; @@ -1465,7 +1465,7 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) struct arm_smmu_master *master; bool ssid_valid = evt[0] & EVTQ_0_SSV; u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]); - struct iommu_fault_event fault_evt = { }; + struct iopf_fault fault_evt = { }; struct iommu_fault *flt = &fault_evt.fault; switch (FIELD_GET(EVTQ_0_ID, evt[0])) { diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 40edd282903f..9751f037e188 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -565,13 +565,12 @@ static int prq_to_iommu_prot(struct page_req_dsc *req) static int intel_svm_prq_report(struct intel_iommu *iommu, struct device *dev, struct page_req_dsc *desc) { - struct iommu_fault_event event; + struct iopf_fault event = { }; if (!dev || !dev_is_pci(dev)) return -ENODEV; /* Fill in event data for device specific processing */ - memset(&event, 0, sizeof(struct iommu_fault_event)); event.fault.type = IOMMU_FAULT_PAGE_REQ; event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT; event.fault.prm.pasid = desc->pasid; @@ -743,7 +742,7 @@ static irqreturn_t prq_event_thread(int irq, void *d) } int intel_svm_page_response(struct device *dev, - struct iommu_fault_event *evt, + struct 
iopf_fault *evt, struct iommu_page_response *msg) { struct device_domain_info *info = dev_iommu_priv_get(dev); diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 4fda01de5589..10d48eb72608 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -25,11 +25,6 @@ struct iopf_queue { struct mutex lock; }; -struct iopf_fault { - struct iommu_fault fault; - struct list_head list; -}; - struct iopf_group { struct iopf_fault last_fault; struct list_head faults; diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index bfa3c594542c..2bfdacdee8aa 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -1341,10 +1341,10 @@ EXPORT_SYMBOL_GPL(iommu_group_put); * * Return 0 on success, or an error. */ -int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) +int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { struct dev_iommu *param = dev->iommu; - struct iommu_fault_event *evt_pending = NULL; + struct iopf_fault *evt_pending = NULL; struct iommu_fault_param *fparam; int ret = 0; @@ -1357,7 +1357,7 @@ int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt) if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { - evt_pending = kmemdup(evt, sizeof(struct iommu_fault_event), + evt_pending = kmemdup(evt, sizeof(struct iopf_fault), GFP_KERNEL); if (!evt_pending) { ret = -ENOMEM; @@ -1386,7 +1386,7 @@ int iommu_page_response(struct device *dev, { bool needs_pasid; int ret = -EINVAL; - struct iommu_fault_event *evt; + struct iopf_fault *evt; struct iommu_fault_page_request *prm; struct dev_iommu *param = dev->iommu; const struct iommu_ops *ops = dev_iommu_ops(dev); From patchwork Wed Dec 20 01:23:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181408 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351720dyi; Tue, 19 Dec 2023 17:31:57 -0800 (PST) X-Google-Smtp-Source: AGHT+IHFziMt3KiPN+HZEgcGt3DnO6JIRZyLLqLldGPy6yNmsZeuE8LnKXIUVPoPac+jy2ovmpjV X-Received: by 2002:ac2:4d1b:0:b0:50e:450a:486b with SMTP id r27-20020ac24d1b000000b0050e450a486bmr933257lfi.20.1703035916786; Tue, 19 Dec 2023 17:31:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035916; cv=none; d=google.com; s=arc-20160816; b=0nkDWiy2K+ZBIDBezsQrdDvVA3wp2V6zdmpqikcbOKNqQZbpMTeP6F0nH7g0Wf0yRP ESjcd5ieHOv76el7p4WxufQ+1+xM8ZJfewsSST++lznnqjqvhx9Df22OeB5HJr9vpe3O 25tuqjIecLNIRjPebj7/AvUiVJs/NvMSxKba+Bh97rw8WMDon24mjzo97mOeO6TkukYh 4CFF6Rzjsi/l3DmdJPEbapTj1SU+J7hoWCYzz0nfNe2JyV94OvW8Iws2L/hTUx+v/O7a cD1wizjXKC/Vb1xAhH90WaUvy8LfE54PzJNsWNyo1awMBmBorp+t0aLr/u4/UK/Z4gCh aiRg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=SQ4WUTYfN1kkVE9316Ja9GjWvaninkiBoZT15YkEEr8=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=LyG/yVTNhJ49sLtqiePhW0lj7UigtZBmhbysGdE/j+hrQPrhWAKISTz9LumRoB7Kcw TcLepsuc3fNPp4wehgGDWT1oIwKfjw2c2WOgAiHLUpaVFHtkIOhOE9lJarsdsQjf6AMi jTPFTc/f9eV74gzZRyBIgv4KW3KPoNGWcmlP38pXokUKIstbkzB7SG6eXSKSN0mKv2sb 6NKTM1r6S00Vx71Ks2+0qS/5aw0g3KrwHRtY+ZhunLR5Aoti7cvxVBAPMep5NRoxR7Oz bv0xYh4h6IEdsF13VPsqP1yx0JSpM8M9OykfCUMn757W1mN9sNSo8R1ZZnNwXe1ZSUOM 4bgA== ARC-Authentication-Results: i=1; 
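A short sketch of a reporting path with the merged structure; example_report_prq() and its parameters are illustrative only, but the field assignments mirror the intel_svm_prq_report() conversion above.

static int example_report_prq(struct device *dev, u64 addr, u32 pasid, u32 grpid)
{
	/* One on-stack descriptor now serves both reporting and queueing */
	struct iopf_fault evt = {};

	evt.fault.type = IOMMU_FAULT_PAGE_REQ;
	evt.fault.prm.addr = addr;
	evt.fault.prm.pasid = pasid;
	evt.fault.prm.grpid = grpid;
	evt.fault.prm.perm = IOMMU_FAULT_PERM_READ;
	evt.fault.prm.flags = IOMMU_FAULT_PAGE_REQUEST_PASID_VALID |
			      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;

	return iommu_report_device_fault(dev, &evt);
}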
From: Lu Baolu To: Joerg Roedel ,
Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 08/14] iommu: Prepare for separating SVA and IOPF Date: Wed, 20 Dec 2023 09:23:26 +0800 Message-Id: <20231220012332.168188-9-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762589051991724 X-GMAIL-MSGID: 1785762589051991724 Move iopf_group data structure to iommu.h to make it a minimal set of faults that a domain's page fault handler should handle. Add a new function, iopf_free_group(), to free a fault group after all faults in the group are handled. This function will be made global so that it can be called from other files, such as iommu-sva.c. Move iopf_queue data structure to iommu.h to allow the workqueue to be scheduled out of this file. This will simplify the sequential patches. Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Reviewed-by: Yi Liu Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 20 +++++++++++++++++++- drivers/iommu/io-pgfault.c | 37 +++++++++++++------------------------ 2 files changed, 32 insertions(+), 25 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index f97a5ab52af6..799b56563026 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -41,7 +41,6 @@ struct iommu_dirty_ops; struct notifier_block; struct iommu_sva; struct iommu_dma_cookie; -struct iopf_queue; #define IOMMU_FAULT_PERM_READ (1 << 0) /* read */ #define IOMMU_FAULT_PERM_WRITE (1 << 1) /* write */ @@ -126,6 +125,25 @@ struct iopf_fault { struct list_head list; }; +struct iopf_group { + struct iopf_fault last_fault; + struct list_head faults; + struct work_struct work; + struct device *dev; +}; + +/** + * struct iopf_queue - IO Page Fault queue + * @wq: the fault workqueue + * @devices: devices attached to this queue + * @lock: protects the device list + */ +struct iopf_queue { + struct workqueue_struct *wq; + struct list_head devices; + struct mutex lock; +}; + /* iommu fault flags */ #define IOMMU_FAULT_READ 0x0 #define IOMMU_FAULT_WRITE 0x1 diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 10d48eb72608..c7e6bbed5c05 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -13,24 +13,17 @@ #include "iommu-sva.h" -/** - * struct iopf_queue - IO Page Fault queue - * @wq: the fault workqueue - * @devices: devices attached to this queue - * @lock: protects the device list - */ -struct iopf_queue { - struct workqueue_struct *wq; - struct list_head devices; - struct mutex lock; -}; +static void iopf_free_group(struct iopf_group *group) +{ + struct iopf_fault *iopf, *next; -struct iopf_group { - struct iopf_fault last_fault; - struct list_head faults; - struct work_struct work; - struct device *dev; -}; + list_for_each_entry_safe(iopf, next, &group->faults, list) { + if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) + kfree(iopf); + } + + kfree(group); +} static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf, enum iommu_page_response_code status) @@ -50,9 +43,9 @@ 
static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf, static void iopf_handler(struct work_struct *work) { + struct iopf_fault *iopf; struct iopf_group *group; struct iommu_domain *domain; - struct iopf_fault *iopf, *next; enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS; group = container_of(work, struct iopf_group, work); @@ -61,7 +54,7 @@ static void iopf_handler(struct work_struct *work) if (!domain || !domain->iopf_handler) status = IOMMU_PAGE_RESP_INVALID; - list_for_each_entry_safe(iopf, next, &group->faults, list) { + list_for_each_entry(iopf, &group->faults, list) { /* * For the moment, errors are sticky: don't handle subsequent * faults in the group if there is an error. @@ -69,14 +62,10 @@ static void iopf_handler(struct work_struct *work) if (status == IOMMU_PAGE_RESP_SUCCESS) status = domain->iopf_handler(&iopf->fault, domain->fault_data); - - if (!(iopf->fault.prm.flags & - IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) - kfree(iopf); } iopf_complete_group(group->dev, &group->last_fault, status); - kfree(group); + iopf_free_group(group); } /** From patchwork Wed Dec 20 01:23:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181409 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351768dyi; Tue, 19 Dec 2023 17:32:05 -0800 (PST) X-Google-Smtp-Source: AGHT+IEmtEnQScdtsod7WoDaLPlFGwReRrNvbqxxDH7Tomlepas5AVHJq9IhXcpbhbmEwcy8bNhR X-Received: by 2002:a17:906:d4:b0:a23:5d29:da14 with SMTP id 20-20020a17090600d400b00a235d29da14mr2002752eji.51.1703035924783; Tue, 19 Dec 2023 17:32:04 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035924; cv=none; d=google.com; s=arc-20160816; b=hrOf4kYZraSz0eJY9vFh+JLDqfn0HNsndkZ5rFoiChl+Rlb0nAAp/wznFse6mq2Y22 KftkmPBzPRF3OGTUBR1ozHSa9gNPjqBAQ/zaCXBPZ0IZ1PnLFVkfoTVpDIQncyQ7CypI DSvDYYFKue5iQwupeu2xThY+bRb7sp0kQ3JKG8CPHsPFG9kTNHVY9VVw0hKn4NNuf37L Tzm6Y6ScjkBfYFOlCVQ3hvKFAe0OqLgubq5cFdWFkGlHOfcXOWtBt8ZyQmMMbFN3NqYx THkbJYfERYn8wJlmMo9zFCi0PRLzy7O5j/58/XjfpEREwCvwEzB+4z/TIYrJrpNZtDVM fF0Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=GxlFsKHdhor6uvR6MywQH6ofEJdkQWMFUVZIDM/8dIs=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=kM1lZTT5yAI8mqRorY8C3KWA9ynQMvLW93U+Ns1X75FgWyptk11YUlqWUFnWuRyX/j fUODeO6SWk24fAcHw6TrE9nI26y+V1ei8rVTfFvpfZF5vgLn1k7ZMmjBZQkr2X4IZf4L IH9c4wFklgSSn2Wx11PqzUvzObmH6fBtiH24MUBfo9DEJFYTPCwWHSiDPV/TTJguzn0L rfNJv10AV0uCACX3RFCdxHkiFPaSKyouZ93ICUCisevBjrqJMkmlOyGeSNTFcGUVZWky eFKH9OJjUOZtFhqjeYXg1ldyrTAyHyo1AniK3bLxNr+ESeI0FFSE3js5ILgTm6XeQevr v2Uw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=aRhUOY3o; spf=pass (google.com: domain of linux-kernel+bounces-6243-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-6243-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
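With struct iopf_group now visible in linux/iommu.h, a consumer outside io-pgfault.c can walk a group directly. example_log_group() below is purely illustrative.

static void example_log_group(struct iopf_group *group)
{
	struct iopf_fault *iopf;

	/* group->last_fault is the request that carried the LAST_PAGE flag */
	list_for_each_entry(iopf, &group->faults, list)
		dev_dbg(group->dev, "iopf: pasid %u grpid %u addr %llx\n",
			iopf->fault.prm.pasid, iopf->fault.prm.grpid,
			(unsigned long long)iopf->fault.prm.addr);
}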
From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 09/14] iommu: Make iommu_queue_iopf() more generic Date: Wed, 20 Dec 2023 09:23:27 +0800 Message-Id: <20231220012332.168188-10-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762597937360626 X-GMAIL-MSGID: 1785762597937360626 Make iommu_queue_iopf() more generic by making the iopf_group a minimal set of iopf's that an iopf handler of domain should handle and respond to. Add domain parameter to struct iopf_group so that the handler can retrieve and use it directly. Change iommu_queue_iopf() to forward groups of iopf's to the domain's iopf handler. This is also a necessary step to decouple the sva iopf handling code from this interface. Signed-off-by: Lu Baolu Reviewed-by: Kevin Tian Reviewed-by: Jason Gunthorpe Reviewed-by: Yi Liu Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 4 +-- drivers/iommu/iommu-sva.h | 6 ++-- drivers/iommu/io-pgfault.c | 68 +++++++++++++++++++++++++++++++------- drivers/iommu/iommu-sva.c | 3 +- 4 files changed, 61 insertions(+), 20 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 799b56563026..d90168d635cd 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -130,6 +130,7 @@ struct iopf_group { struct list_head faults; struct work_struct work; struct device *dev; + struct iommu_domain *domain; }; /** @@ -209,8 +210,7 @@ struct iommu_domain { unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */ struct iommu_domain_geometry geometry; struct iommu_dma_cookie *iova_cookie; - enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault, - void *data); + int (*iopf_handler)(struct iopf_group *group); void *fault_data; union { struct { diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h index de7819c796ce..27c8da115b41 100644 --- a/drivers/iommu/iommu-sva.h +++ b/drivers/iommu/iommu-sva.h @@ -22,8 +22,7 @@ int iopf_queue_flush_dev(struct device *dev); struct iopf_queue *iopf_queue_alloc(const char *name); void iopf_queue_free(struct iopf_queue *queue); int iopf_queue_discard_partial(struct iopf_queue *queue); -enum iommu_page_response_code -iommu_sva_handle_iopf(struct iommu_fault *fault, void *data); +int iommu_sva_handle_iopf(struct iopf_group *group); #else /* CONFIG_IOMMU_SVA */ static inline int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) @@ -62,8 +61,7 @@ static inline int iopf_queue_discard_partial(struct iopf_queue *queue) return -ENODEV; } -static inline enum iommu_page_response_code -iommu_sva_handle_iopf(struct iommu_fault *fault, void *data) +static inline int iommu_sva_handle_iopf(struct iopf_group *group) { return IOMMU_PAGE_RESP_INVALID; } diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index c7e6bbed5c05..13cd0929e766 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -13,6 +13,9 @@ #include "iommu-sva.h" +enum iommu_page_response_code +iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm); + static void iopf_free_group(struct iopf_group *group) { struct iopf_fault *iopf, *next; @@ -45,29 +48,48 @@ static void iopf_handler(struct work_struct *work) { struct iopf_fault *iopf; struct iopf_group *group; - struct iommu_domain *domain; enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS; group = container_of(work, struct iopf_group, work); - domain = iommu_get_domain_for_dev_pasid(group->dev, - 
group->last_fault.fault.prm.pasid, 0); - if (!domain || !domain->iopf_handler) - status = IOMMU_PAGE_RESP_INVALID; - list_for_each_entry(iopf, &group->faults, list) { /* * For the moment, errors are sticky: don't handle subsequent * faults in the group if there is an error. */ - if (status == IOMMU_PAGE_RESP_SUCCESS) - status = domain->iopf_handler(&iopf->fault, - domain->fault_data); + if (status != IOMMU_PAGE_RESP_SUCCESS) + break; + + status = iommu_sva_handle_mm(&iopf->fault, group->domain->mm); } iopf_complete_group(group->dev, &group->last_fault, status); iopf_free_group(group); } +static struct iommu_domain *get_domain_for_iopf(struct device *dev, + struct iommu_fault *fault) +{ + struct iommu_domain *domain; + + if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) { + domain = iommu_get_domain_for_dev_pasid(dev, fault->prm.pasid, 0); + if (IS_ERR(domain)) + domain = NULL; + } else { + domain = iommu_get_domain_for_dev(dev); + } + + if (!domain || !domain->iopf_handler) { + dev_warn_ratelimited(dev, + "iopf (pasid %d) without domain attached or handler installed\n", + fault->prm.pasid); + + return NULL; + } + + return domain; +} + /** * iommu_queue_iopf - IO Page Fault handler * @fault: fault event @@ -112,6 +134,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) { int ret; struct iopf_group *group; + struct iommu_domain *domain; struct iopf_fault *iopf, *next; struct iommu_fault_param *iopf_param; struct dev_iommu *param = dev->iommu; @@ -143,6 +166,12 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) return 0; } + domain = get_domain_for_iopf(dev, fault); + if (!domain) { + ret = -EINVAL; + goto cleanup_partial; + } + group = kzalloc(sizeof(*group), GFP_KERNEL); if (!group) { /* @@ -157,8 +186,8 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) group->dev = dev; group->last_fault.fault = *fault; INIT_LIST_HEAD(&group->faults); + group->domain = domain; list_add(&group->last_fault.list, &group->faults); - INIT_WORK(&group->work, iopf_handler); /* See if we have partial faults for this group */ list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { @@ -167,9 +196,13 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) list_move(&iopf->list, &group->faults); } - queue_work(iopf_param->queue->wq, &group->work); - return 0; + mutex_unlock(&iopf_param->lock); + ret = domain->iopf_handler(group); + mutex_lock(&iopf_param->lock); + if (ret) + iopf_free_group(group); + return ret; cleanup_partial: list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { if (iopf->fault.prm.grpid == fault->prm.grpid) { @@ -181,6 +214,17 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) } EXPORT_SYMBOL_GPL(iommu_queue_iopf); +int iommu_sva_handle_iopf(struct iopf_group *group) +{ + struct iommu_fault_param *fault_param = group->dev->iommu->fault_param; + + INIT_WORK(&group->work, iopf_handler); + if (!queue_work(fault_param->queue->wq, &group->work)) + return -EBUSY; + + return 0; +} + /** * iopf_queue_flush_dev - Ensure that all queued faults have been processed * @dev: the endpoint whose faults need to be flushed. 
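The new contract in a nutshell: the domain's iopf_handler receives a complete group and returns 0 once it has taken ownership (it must then respond and free the group), or an error so that iommu_queue_iopf() frees it. Below is a sketch of a deferring handler in the same shape as iommu_sva_handle_iopf() above; example_wq and example_handle_group() are assumed helpers, not part of this series.

static struct workqueue_struct *example_wq;	/* assumed to be allocated elsewhere */

static void example_handle_group(struct work_struct *work)
{
	struct iopf_group *group = container_of(work, struct iopf_group, work);

	/*
	 * Resolve each entry in group->faults, send the page response, then
	 * release the group (the helpers for that are exported to modules
	 * later in the series).
	 */
}

static int example_iopf_handler(struct iopf_group *group)
{
	INIT_WORK(&group->work, example_handle_group);
	if (!queue_work(example_wq, &group->work))
		return -EBUSY;

	return 0;
}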
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c index c3fc9201d0be..fcae7308fcb7 100644 --- a/drivers/iommu/iommu-sva.c +++ b/drivers/iommu/iommu-sva.c @@ -163,11 +163,10 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid); * I/O page fault handler for SVA */ enum iommu_page_response_code -iommu_sva_handle_iopf(struct iommu_fault *fault, void *data) +iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm) { vm_fault_t ret; struct vm_area_struct *vma; - struct mm_struct *mm = data; unsigned int access_flags = 0; unsigned int fault_flags = FAULT_FLAG_REMOTE; struct iommu_fault_page_request *prm = &fault->prm; From patchwork Wed Dec 20 01:23:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181410 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2351941dyi; Tue, 19 Dec 2023 17:32:29 -0800 (PST) X-Google-Smtp-Source: AGHT+IGv874TP3Bm9D5hQJdfH7dMJE+iiHJBpoCKuI2aJVm+ZTJx+x4s4dp0oILh7u2TLKn3s+NR X-Received: by 2002:a50:d6d8:0:b0:553:31c4:fb1e with SMTP id l24-20020a50d6d8000000b0055331c4fb1emr1757786edj.123.1703035949202; Tue, 19 Dec 2023 17:32:29 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035949; cv=none; d=google.com; s=arc-20160816; b=l7nmNZ7qz+GM2bIEVNqOV4MkMYCeFN5hJYfZh//mJiz2weehnt4gmu61p7td/sZG0r Y/pdaEpBTZK56TSlvzx9hPsC2Yqiu+3XlDd+EH0jjTZNqLs30Sj8LofzVgHIIcKvC10F AXeWBaqX8uG83qKE9lfLRyUk+/6AVEShZmJd1vC4b4/hP/ncWQ+1zlr5hEjJKvFniO40 ZwcSv0SKI5ZO9uNfFlbQI2iXq2coMWR1tFTX13eZLmq8/x92/pM0w9qjnkKtkrB0MKiz h0a8MLsp641Btw0yLwdyrcyAyTEKHRnx6zwoyyBNG4N83xqp6PCazMnFgoa0QyhQ6ED/ 0UVQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=ibET0aJGS/FsyX+xZPitVcgmFdiWxxCadCjBhlEpaMA=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=AO7WRPyzBZHHlb4YL7Al1jAgpTXQaz4zQqo05ihFjUEzlc8ogEQ49fQqkjA8ckJrjQ zw4FcqP/KVG5L+IjiPpPHkw0bLB2La24efKC1xSEFAZKQsEsAZq+ibR6Chl5Him0m/TW qMXyF70KmIAbGBdceAdMWjaYERya+xR+Lqiz7JeSCgV/pyqMaL07w3MzOhL7dfyn+JqQ fnd76V9vJUu4b+opDyAChMSTge7sa/GwH6zamejmyYqgh6jAICtFXuj6gvJbBaA9aDHo 3+Jh0SLFvRnvSHonHrnxvlyuDq5bj8y/4KQ+gZG+uD4yn6jIpt8HQLn77Cy6vqhu/UiI JHuA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cRLkuRHu; spf=pass (google.com: domain of linux-kernel+bounces-6244-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-6244-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 10/14] iommu: Separate SVA and IOPF
Date: Wed, 20 Dec 2023 09:23:28 +0800
Message-Id: <20231220012332.168188-11-baolu.lu@linux.intel.com>
X-Mailer:
git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762623522364550 X-GMAIL-MSGID: 1785762623522364550 Add CONFIG_IOMMU_IOPF for page fault handling framework and select it from its real consumer. Move iopf function declaration from iommu-sva.h to iommu.h and remove iommu-sva.h as it's empty now. Consolidate all SVA related code into iommu-sva.c: - Move iommu_sva_domain_alloc() from iommu.c to iommu-sva.c. - Move sva iopf handling code from io-pgfault.c to iommu-sva.c. Consolidate iommu_report_device_fault() and iommu_page_response() into io-pgfault.c. Export iopf_free_group() and iopf_group_response() for iopf handlers implemented in modules. Some functions are renamed with more meaningful names. No other intentional functionality changes. Signed-off-by: Lu Baolu Reviewed-by: Jason Gunthorpe Reviewed-by: Kevin Tian Tested-by: Yan Zhao Tested-by: Longfang Liu --- include/linux/iommu.h | 98 ++++++--- drivers/iommu/iommu-sva.h | 69 ------- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 1 - drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 - drivers/iommu/intel/iommu.c | 1 - drivers/iommu/intel/svm.c | 1 - drivers/iommu/io-pgfault.c | 188 +++++++++++++----- drivers/iommu/iommu-sva.c | 68 ++++++- drivers/iommu/iommu.c | 133 ------------- drivers/iommu/Kconfig | 4 + drivers/iommu/Makefile | 3 +- drivers/iommu/intel/Kconfig | 1 + 12 files changed, 277 insertions(+), 291 deletions(-) delete mode 100644 drivers/iommu/iommu-sva.h diff --git a/include/linux/iommu.h b/include/linux/iommu.h index d90168d635cd..dbad2cb9eca2 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -733,10 +733,6 @@ extern struct iommu_group *iommu_group_get(struct device *dev); extern struct iommu_group *iommu_group_ref_get(struct iommu_group *group); extern void iommu_group_put(struct iommu_group *group); -extern int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt); -extern int iommu_page_response(struct device *dev, - struct iommu_page_response *msg); - extern int iommu_group_id(struct iommu_group *group); extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *); @@ -952,8 +948,6 @@ bool iommu_group_dma_owner_claimed(struct iommu_group *group); int iommu_device_claim_dma_owner(struct device *dev, void *owner); void iommu_device_release_dma_owner(struct device *dev); -struct iommu_domain *iommu_sva_domain_alloc(struct device *dev, - struct mm_struct *mm); int iommu_attach_device_pasid(struct iommu_domain *domain, struct device *dev, ioasid_t pasid); void iommu_detach_device_pasid(struct iommu_domain *domain, @@ -1142,18 +1136,6 @@ static inline void iommu_group_put(struct iommu_group *group) { } -static inline -int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) -{ - return -ENODEV; -} - -static inline int iommu_page_response(struct device *dev, - struct iommu_page_response *msg) -{ - return -ENODEV; -} - static inline int iommu_group_id(struct iommu_group *group) { return -ENODEV; @@ -1302,12 +1284,6 @@ static inline int iommu_device_claim_dma_owner(struct device *dev, void *owner) return -ENODEV; } -static inline struct iommu_domain * -iommu_sva_domain_alloc(struct device *dev, struct mm_struct *mm) -{ - return NULL; -} - static inline int iommu_attach_device_pasid(struct 
iommu_domain *domain, struct device *dev, ioasid_t pasid) { @@ -1447,6 +1423,8 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm); void iommu_sva_unbind_device(struct iommu_sva *handle); u32 iommu_sva_get_pasid(struct iommu_sva *handle); +struct iommu_domain *iommu_sva_domain_alloc(struct device *dev, + struct mm_struct *mm); #else static inline struct iommu_sva * iommu_sva_bind_device(struct device *dev, struct mm_struct *mm) @@ -1471,6 +1449,78 @@ static inline u32 mm_get_enqcmd_pasid(struct mm_struct *mm) } static inline void mm_pasid_drop(struct mm_struct *mm) {} + +static inline struct iommu_domain * +iommu_sva_domain_alloc(struct device *dev, struct mm_struct *mm) +{ + return NULL; +} #endif /* CONFIG_IOMMU_SVA */ +#ifdef CONFIG_IOMMU_IOPF +int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev); +int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev); +int iopf_queue_flush_dev(struct device *dev); +struct iopf_queue *iopf_queue_alloc(const char *name); +void iopf_queue_free(struct iopf_queue *queue); +int iopf_queue_discard_partial(struct iopf_queue *queue); +void iopf_free_group(struct iopf_group *group); +int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt); +int iommu_page_response(struct device *dev, struct iommu_page_response *msg); +int iopf_group_response(struct iopf_group *group, + enum iommu_page_response_code status); +#else +static inline int +iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) +{ + return -ENODEV; +} + +static inline int +iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) +{ + return -ENODEV; +} + +static inline int iopf_queue_flush_dev(struct device *dev) +{ + return -ENODEV; +} + +static inline struct iopf_queue *iopf_queue_alloc(const char *name) +{ + return NULL; +} + +static inline void iopf_queue_free(struct iopf_queue *queue) +{ +} + +static inline int iopf_queue_discard_partial(struct iopf_queue *queue) +{ + return -ENODEV; +} + +static inline void iopf_free_group(struct iopf_group *group) +{ +} + +static inline int +iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) +{ + return -ENODEV; +} + +static inline int +iommu_page_response(struct device *dev, struct iommu_page_response *msg) +{ + return -ENODEV; +} + +static inline int iopf_group_response(struct iopf_group *group, + enum iommu_page_response_code status) +{ + return -ENODEV; +} +#endif /* CONFIG_IOMMU_IOPF */ #endif /* __LINUX_IOMMU_H */ diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h deleted file mode 100644 index 27c8da115b41..000000000000 --- a/drivers/iommu/iommu-sva.h +++ /dev/null @@ -1,69 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * SVA library for IOMMU drivers - */ -#ifndef _IOMMU_SVA_H -#define _IOMMU_SVA_H - -#include - -/* I/O Page fault */ -struct device; -struct iommu_fault; -struct iopf_queue; - -#ifdef CONFIG_IOMMU_SVA -int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev); - -int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev); -int iopf_queue_remove_device(struct iopf_queue *queue, - struct device *dev); -int iopf_queue_flush_dev(struct device *dev); -struct iopf_queue *iopf_queue_alloc(const char *name); -void iopf_queue_free(struct iopf_queue *queue); -int iopf_queue_discard_partial(struct iopf_queue *queue); -int iommu_sva_handle_iopf(struct iopf_group *group); - -#else /* CONFIG_IOMMU_SVA */ -static inline int iommu_queue_iopf(struct iommu_fault *fault, struct 
device *dev) -{ - return -ENODEV; -} - -static inline int iopf_queue_add_device(struct iopf_queue *queue, - struct device *dev) -{ - return -ENODEV; -} - -static inline int iopf_queue_remove_device(struct iopf_queue *queue, - struct device *dev) -{ - return -ENODEV; -} - -static inline int iopf_queue_flush_dev(struct device *dev) -{ - return -ENODEV; -} - -static inline struct iopf_queue *iopf_queue_alloc(const char *name) -{ - return NULL; -} - -static inline void iopf_queue_free(struct iopf_queue *queue) -{ -} - -static inline int iopf_queue_discard_partial(struct iopf_queue *queue) -{ - return -ENODEV; -} - -static inline int iommu_sva_handle_iopf(struct iopf_group *group) -{ - return IOMMU_PAGE_RESP_INVALID; -} -#endif /* CONFIG_IOMMU_SVA */ -#endif /* _IOMMU_SVA_H */ diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index ab2b0a5e4369..6513a98fcb72 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -10,7 +10,6 @@ #include #include "arm-smmu-v3.h" -#include "../../iommu-sva.h" #include "../../io-pgtable-arm.h" struct arm_smmu_mmu_notifier { diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index ab4f04c7f932..4e93e845458c 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -29,7 +29,6 @@ #include "arm-smmu-v3.h" #include "../../dma-iommu.h" -#include "../../iommu-sva.h" static bool disable_bypass = true; module_param(disable_bypass, bool, 0444); diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index df6ceefc09ee..29a12f289e2e 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -27,7 +27,6 @@ #include "iommu.h" #include "../dma-iommu.h" #include "../irq_remapping.h" -#include "../iommu-sva.h" #include "pasid.h" #include "cap_audit.h" #include "perfmon.h" diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 9751f037e188..e1cbcb9515f0 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -22,7 +22,6 @@ #include "iommu.h" #include "pasid.h" #include "perf.h" -#include "../iommu-sva.h" #include "trace.h" static irqreturn_t prq_event_thread(int irq, void *d); diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 13cd0929e766..c1e88da973ce 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -11,12 +11,9 @@ #include #include -#include "iommu-sva.h" +#include "iommu-priv.h" -enum iommu_page_response_code -iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm); - -static void iopf_free_group(struct iopf_group *group) +void iopf_free_group(struct iopf_group *group) { struct iopf_fault *iopf, *next; @@ -27,44 +24,7 @@ static void iopf_free_group(struct iopf_group *group) kfree(group); } - -static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf, - enum iommu_page_response_code status) -{ - struct iommu_page_response resp = { - .pasid = iopf->fault.prm.pasid, - .grpid = iopf->fault.prm.grpid, - .code = status, - }; - - if ((iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) && - (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)) - resp.flags = IOMMU_PAGE_RESP_PASID_VALID; - - return iommu_page_response(dev, &resp); -} - -static void iopf_handler(struct work_struct *work) -{ - struct iopf_fault *iopf; - struct iopf_group *group; - enum iommu_page_response_code status = 
IOMMU_PAGE_RESP_SUCCESS; - - group = container_of(work, struct iopf_group, work); - list_for_each_entry(iopf, &group->faults, list) { - /* - * For the moment, errors are sticky: don't handle subsequent - * faults in the group if there is an error. - */ - if (status != IOMMU_PAGE_RESP_SUCCESS) - break; - - status = iommu_sva_handle_mm(&iopf->fault, group->domain->mm); - } - - iopf_complete_group(group->dev, &group->last_fault, status); - iopf_free_group(group); -} +EXPORT_SYMBOL_GPL(iopf_free_group); static struct iommu_domain *get_domain_for_iopf(struct device *dev, struct iommu_fault *fault) @@ -91,7 +51,7 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, } /** - * iommu_queue_iopf - IO Page Fault handler + * iommu_handle_iopf - IO Page Fault handler * @fault: fault event * @dev: struct device. * @@ -130,7 +90,7 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, * * Return: 0 on success and <0 on error. */ -int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) +static int iommu_handle_iopf(struct iommu_fault *fault, struct device *dev) { int ret; struct iopf_group *group; @@ -212,18 +172,117 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev) } return ret; } -EXPORT_SYMBOL_GPL(iommu_queue_iopf); -int iommu_sva_handle_iopf(struct iopf_group *group) +/** + * iommu_report_device_fault() - Report fault event to device driver + * @dev: the device + * @evt: fault event data + * + * Called by IOMMU drivers when a fault is detected, typically in a threaded IRQ + * handler. When this function fails and the fault is recoverable, it is the + * caller's responsibility to complete the fault. + * + * Return 0 on success, or an error. + */ +int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { - struct iommu_fault_param *fault_param = group->dev->iommu->fault_param; + struct dev_iommu *param = dev->iommu; + struct iopf_fault *evt_pending = NULL; + struct iommu_fault_param *fparam; + int ret = 0; - INIT_WORK(&group->work, iopf_handler); - if (!queue_work(fault_param->queue->wq, &group->work)) - return -EBUSY; + if (!param || !evt) + return -EINVAL; - return 0; + /* we only report device fault if there is a handler registered */ + mutex_lock(¶m->lock); + fparam = param->fault_param; + + if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && + (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { + evt_pending = kmemdup(evt, sizeof(struct iopf_fault), + GFP_KERNEL); + if (!evt_pending) { + ret = -ENOMEM; + goto done_unlock; + } + mutex_lock(&fparam->lock); + list_add_tail(&evt_pending->list, &fparam->faults); + mutex_unlock(&fparam->lock); + } + + ret = iommu_handle_iopf(&evt->fault, dev); + if (ret && evt_pending) { + mutex_lock(&fparam->lock); + list_del(&evt_pending->list); + mutex_unlock(&fparam->lock); + kfree(evt_pending); + } +done_unlock: + mutex_unlock(¶m->lock); + return ret; +} +EXPORT_SYMBOL_GPL(iommu_report_device_fault); + +int iommu_page_response(struct device *dev, + struct iommu_page_response *msg) +{ + bool needs_pasid; + int ret = -EINVAL; + struct iopf_fault *evt; + struct iommu_fault_page_request *prm; + struct dev_iommu *param = dev->iommu; + const struct iommu_ops *ops = dev_iommu_ops(dev); + bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID; + + if (!ops->page_response) + return -ENODEV; + + if (!param || !param->fault_param) + return -EINVAL; + + /* Only send response if there is a fault report pending */ + mutex_lock(¶m->fault_param->lock); + if 
(list_empty(¶m->fault_param->faults)) { + dev_warn_ratelimited(dev, "no pending PRQ, drop response\n"); + goto done_unlock; + } + /* + * Check if we have a matching page request pending to respond, + * otherwise return -EINVAL + */ + list_for_each_entry(evt, ¶m->fault_param->faults, list) { + prm = &evt->fault.prm; + if (prm->grpid != msg->grpid) + continue; + + /* + * If the PASID is required, the corresponding request is + * matched using the group ID, the PASID valid bit and the PASID + * value. Otherwise only the group ID matches request and + * response. + */ + needs_pasid = prm->flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID; + if (needs_pasid && (!has_pasid || msg->pasid != prm->pasid)) + continue; + + if (!needs_pasid && has_pasid) { + /* No big deal, just clear it. */ + msg->flags &= ~IOMMU_PAGE_RESP_PASID_VALID; + msg->pasid = 0; + } + + ret = ops->page_response(dev, evt, msg); + list_del(&evt->list); + kfree(evt); + break; + } + +done_unlock: + mutex_unlock(¶m->fault_param->lock); + return ret; } +EXPORT_SYMBOL_GPL(iommu_page_response); /** * iopf_queue_flush_dev - Ensure that all queued faults have been processed @@ -258,6 +317,31 @@ int iopf_queue_flush_dev(struct device *dev) } EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); +/** + * iopf_group_response - Respond a group of page faults + * @group: the group of faults with the same group id + * @status: the response code + * + * Return 0 on success and <0 on error. + */ +int iopf_group_response(struct iopf_group *group, + enum iommu_page_response_code status) +{ + struct iopf_fault *iopf = &group->last_fault; + struct iommu_page_response resp = { + .pasid = iopf->fault.prm.pasid, + .grpid = iopf->fault.prm.grpid, + .code = status, + }; + + if ((iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) && + (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)) + resp.flags = IOMMU_PAGE_RESP_PASID_VALID; + + return iommu_page_response(group->dev, &resp); +} +EXPORT_SYMBOL_GPL(iopf_group_response); + /** * iopf_queue_discard_partial - Remove all pending partial fault * @queue: the queue whose partial faults need to be discarded diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c index fcae7308fcb7..9de878e40413 100644 --- a/drivers/iommu/iommu-sva.c +++ b/drivers/iommu/iommu-sva.c @@ -7,7 +7,7 @@ #include #include -#include "iommu-sva.h" +#include "iommu-priv.h" static DEFINE_MUTEX(iommu_sva_lock); @@ -159,10 +159,21 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle) } EXPORT_SYMBOL_GPL(iommu_sva_get_pasid); +void mm_pasid_drop(struct mm_struct *mm) +{ + struct iommu_mm_data *iommu_mm = mm->iommu_mm; + + if (!iommu_mm) + return; + + iommu_free_global_pasid(iommu_mm->pasid); + kfree(iommu_mm); +} + /* * I/O page fault handler for SVA */ -enum iommu_page_response_code +static enum iommu_page_response_code iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm) { vm_fault_t ret; @@ -216,13 +227,54 @@ iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm) return status; } -void mm_pasid_drop(struct mm_struct *mm) +static void iommu_sva_handle_iopf(struct work_struct *work) { - struct iommu_mm_data *iommu_mm = mm->iommu_mm; + struct iopf_fault *iopf; + struct iopf_group *group; + enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS; - if (!iommu_mm) - return; + group = container_of(work, struct iopf_group, work); + list_for_each_entry(iopf, &group->faults, list) { + /* + * For the moment, errors are sticky: don't handle subsequent + * faults in the group if there is an 
error. + */ + if (status != IOMMU_PAGE_RESP_SUCCESS) + break; - iommu_free_global_pasid(iommu_mm->pasid); - kfree(iommu_mm); + status = iommu_sva_handle_mm(&iopf->fault, group->domain->mm); + } + + iopf_group_response(group, status); + iopf_free_group(group); +} + +static int iommu_sva_iopf_handler(struct iopf_group *group) +{ + struct iommu_fault_param *fault_param = group->dev->iommu->fault_param; + + INIT_WORK(&group->work, iommu_sva_handle_iopf); + if (!queue_work(fault_param->queue->wq, &group->work)) + return -EBUSY; + + return 0; +} + +struct iommu_domain *iommu_sva_domain_alloc(struct device *dev, + struct mm_struct *mm) +{ + const struct iommu_ops *ops = dev_iommu_ops(dev); + struct iommu_domain *domain; + + domain = ops->domain_alloc(IOMMU_DOMAIN_SVA); + if (!domain) + return NULL; + + domain->type = IOMMU_DOMAIN_SVA; + mmgrab(mm); + domain->mm = mm; + domain->owner = ops; + domain->iopf_handler = iommu_sva_iopf_handler; + + return domain; } diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 2bfdacdee8aa..502779d57b68 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -36,8 +36,6 @@ #include "dma-iommu.h" #include "iommu-priv.h" -#include "iommu-sva.h" - static struct kset *iommu_group_kset; static DEFINE_IDA(iommu_group_ida); static DEFINE_IDA(iommu_global_pasid_ida); @@ -1330,117 +1328,6 @@ void iommu_group_put(struct iommu_group *group) } EXPORT_SYMBOL_GPL(iommu_group_put); -/** - * iommu_report_device_fault() - Report fault event to device driver - * @dev: the device - * @evt: fault event data - * - * Called by IOMMU drivers when a fault is detected, typically in a threaded IRQ - * handler. When this function fails and the fault is recoverable, it is the - * caller's responsibility to complete the fault. - * - * Return 0 on success, or an error. 
- */ -int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) -{ - struct dev_iommu *param = dev->iommu; - struct iopf_fault *evt_pending = NULL; - struct iommu_fault_param *fparam; - int ret = 0; - - if (!param || !evt) - return -EINVAL; - - /* we only report device fault if there is a handler registered */ - mutex_lock(¶m->lock); - fparam = param->fault_param; - - if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && - (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { - evt_pending = kmemdup(evt, sizeof(struct iopf_fault), - GFP_KERNEL); - if (!evt_pending) { - ret = -ENOMEM; - goto done_unlock; - } - mutex_lock(&fparam->lock); - list_add_tail(&evt_pending->list, &fparam->faults); - mutex_unlock(&fparam->lock); - } - - ret = iommu_queue_iopf(&evt->fault, dev); - if (ret && evt_pending) { - mutex_lock(&fparam->lock); - list_del(&evt_pending->list); - mutex_unlock(&fparam->lock); - kfree(evt_pending); - } -done_unlock: - mutex_unlock(¶m->lock); - return ret; -} -EXPORT_SYMBOL_GPL(iommu_report_device_fault); - -int iommu_page_response(struct device *dev, - struct iommu_page_response *msg) -{ - bool needs_pasid; - int ret = -EINVAL; - struct iopf_fault *evt; - struct iommu_fault_page_request *prm; - struct dev_iommu *param = dev->iommu; - const struct iommu_ops *ops = dev_iommu_ops(dev); - bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID; - - if (!ops->page_response) - return -ENODEV; - - if (!param || !param->fault_param) - return -EINVAL; - - /* Only send response if there is a fault report pending */ - mutex_lock(¶m->fault_param->lock); - if (list_empty(¶m->fault_param->faults)) { - dev_warn_ratelimited(dev, "no pending PRQ, drop response\n"); - goto done_unlock; - } - /* - * Check if we have a matching page request pending to respond, - * otherwise return -EINVAL - */ - list_for_each_entry(evt, ¶m->fault_param->faults, list) { - prm = &evt->fault.prm; - if (prm->grpid != msg->grpid) - continue; - - /* - * If the PASID is required, the corresponding request is - * matched using the group ID, the PASID valid bit and the PASID - * value. Otherwise only the group ID matches request and - * response. - */ - needs_pasid = prm->flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID; - if (needs_pasid && (!has_pasid || msg->pasid != prm->pasid)) - continue; - - if (!needs_pasid && has_pasid) { - /* No big deal, just clear it. 
*/ - msg->flags &= ~IOMMU_PAGE_RESP_PASID_VALID; - msg->pasid = 0; - } - - ret = ops->page_response(dev, evt, msg); - list_del(&evt->list); - kfree(evt); - break; - } - -done_unlock: - mutex_unlock(¶m->fault_param->lock); - return ret; -} -EXPORT_SYMBOL_GPL(iommu_page_response); - /** * iommu_group_id - Return ID for a group * @group: the group to ID @@ -3515,26 +3402,6 @@ struct iommu_domain *iommu_get_domain_for_dev_pasid(struct device *dev, } EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev_pasid); -struct iommu_domain *iommu_sva_domain_alloc(struct device *dev, - struct mm_struct *mm) -{ - const struct iommu_ops *ops = dev_iommu_ops(dev); - struct iommu_domain *domain; - - domain = ops->domain_alloc(IOMMU_DOMAIN_SVA); - if (!domain) - return NULL; - - domain->type = IOMMU_DOMAIN_SVA; - mmgrab(mm); - domain->mm = mm; - domain->owner = ops; - domain->iopf_handler = iommu_sva_handle_iopf; - domain->fault_data = mm; - - return domain; -} - ioasid_t iommu_alloc_global_pasid(struct device *dev) { int ret; diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 9a29d742617e..d9ed5ad129f2 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -163,6 +163,9 @@ config IOMMU_SVA select IOMMU_MM_DATA bool +config IOMMU_IOPF + bool + config FSL_PAMU bool "Freescale IOMMU support" depends on PCI @@ -398,6 +401,7 @@ config ARM_SMMU_V3_SVA bool "Shared Virtual Addressing support for the ARM SMMUv3" depends on ARM_SMMU_V3 select IOMMU_SVA + select IOMMU_IOPF select MMU_NOTIFIER help Support for sharing process address spaces with devices using the diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 95ad9dbfbda0..542760d963ec 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -26,6 +26,7 @@ obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o obj-$(CONFIG_S390_IOMMU) += s390-iommu.o obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o -obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o io-pgfault.o +obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o +obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o obj-$(CONFIG_SPRD_IOMMU) += sprd-iommu.o obj-$(CONFIG_APPLE_DART) += apple-dart.o diff --git a/drivers/iommu/intel/Kconfig b/drivers/iommu/intel/Kconfig index 012cd2541a68..a4a125666293 100644 --- a/drivers/iommu/intel/Kconfig +++ b/drivers/iommu/intel/Kconfig @@ -51,6 +51,7 @@ config INTEL_IOMMU_SVM depends on X86_64 select MMU_NOTIFIER select IOMMU_SVA + select IOMMU_IOPF help Shared Virtual Memory (SVM) provides a facility for devices to access DMA resources through process address space by From patchwork Wed Dec 20 01:23:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181411 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2352054dyi; Tue, 19 Dec 2023 17:32:49 -0800 (PST) X-Google-Smtp-Source: AGHT+IFtN4TZ/oPJxqieNsfUvGUji91wsoCqkwAXiMOHZ1XgBvS3TY4bKZiYqgb6tkQ3tXGZ7J9I X-Received: by 2002:a05:6870:3c84:b0:1fb:75c:4000 with SMTP id gl4-20020a0568703c8400b001fb075c4000mr24766607oab.96.1703035968867; Tue, 19 Dec 2023 17:32:48 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035968; cv=none; d=google.com; s=arc-20160816; b=Lwt95MFVnNq+rGY6FvS0nnYi3Ni2jkHwSDmw1RoR+qvOqU0oZvBEj66tLNKpH49rPy m6RzCP4wXTsisbOJUgPIPg/i/osWc77tyys0kwAw+BxTkJRyDHOg3gzI8TwGWoYST+8T t/nTMveSPs83OYk1NJxbtX+Qyn4IuvjCFxpMNvjhw1JPEFcaBXggoeHweRpEDZsuGq0e 7AubVzdKcIyBzSTCmvdatXaSkOxUIjIpvYCIZxdtzrPVZ6GQcSXxwlOs/QZfKuhUsBgJ 
sC+tnn9LKvBJ4T8ZbdKv3Y5BhVau+xfnKdGwoGwfyXGEceCsCRMvuRWkK etR83LRvFXxdf/BRlUhAmhaUUUOB2nLoTlXGjILICqddhxqwr7SkJxH1g 1b02ikGu+ufpeci6HJ12tvixk+9BWeKfUbigAk+SboKs4jr2r7S8AjFcT 3hxG2YM/GkXuldakxa5O1PilVYDitwREiiS5bgMfKXCAYvK66irmd6+UP A==; X-IronPort-AV: E=McAfee;i="6600,9927,10929"; a="2965842" X-IronPort-AV: E=Sophos;i="6.04,290,1695711600"; d="scan'208";a="2965842" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmvoesa104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Dec 2023 17:29:14 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10929"; a="1023319196" X-IronPort-AV: E=Sophos;i="6.04,290,1695711600"; d="scan'208";a="1023319196" Received: from allen-box.sh.intel.com ([10.239.159.127]) by fmsmga006.fm.intel.com with ESMTP; 19 Dec 2023 17:29:10 -0800 From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 11/14] iommu: Refine locking for per-device fault data management Date: Wed, 20 Dec 2023 09:23:29 +0800 Message-Id: <20231220012332.168188-12-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762643772996683 X-GMAIL-MSGID: 1785762643772996683 The per-device fault data is a data structure that is used to store information about faults that occur on a device. This data is allocated when IOPF is enabled on the device and freed when IOPF is disabled. The data is used in the paths of iopf reporting, handling, responding, and draining. The fault data is protected by two locks: - dev->iommu->lock: This lock is used to protect the allocation and freeing of the fault data. - dev->iommu->fault_parameter->lock: This lock is used to protect the fault data itself. Apply the locking mechanism to the fault reporting and responding paths. The fault_parameter->lock is also added in iopf_queue_discard_partial(). It does not fix any real issue, as iopf_queue_discard_partial() is only used in the VT-d driver's prq_event_thread(), which is a single-threaded path that reports the IOPFs. Signed-off-by: Lu Baolu Reviewed-by: Kevin Tian Reviewed-by: Jason Gunthorpe Tested-by: Yan Zhao Tested-by: Longfang Liu --- drivers/iommu/io-pgfault.c | 61 +++++++++++++++++++------------------- 1 file changed, 30 insertions(+), 31 deletions(-) diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index c1e88da973ce..5aea8402be47 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -53,7 +53,7 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, /** * iommu_handle_iopf - IO Page Fault handler * @fault: fault event - * @dev: struct device. + * @iopf_param: the fault parameter of the device. * * Add a fault to the device workqueue, to be handled by mm. * @@ -90,29 +90,21 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, * * Return: 0 on success and <0 on error. 
*/ -static int iommu_handle_iopf(struct iommu_fault *fault, struct device *dev) +static int iommu_handle_iopf(struct iommu_fault *fault, + struct iommu_fault_param *iopf_param) { int ret; struct iopf_group *group; struct iommu_domain *domain; struct iopf_fault *iopf, *next; - struct iommu_fault_param *iopf_param; - struct dev_iommu *param = dev->iommu; + struct device *dev = iopf_param->dev; - lockdep_assert_held(¶m->lock); + lockdep_assert_held(&iopf_param->lock); if (fault->type != IOMMU_FAULT_PAGE_REQ) /* Not a recoverable page fault */ return -EOPNOTSUPP; - /* - * As long as we're holding param->lock, the queue can't be unlinked - * from the device and therefore cannot disappear. - */ - iopf_param = param->fault_param; - if (!iopf_param) - return -ENODEV; - if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { iopf = kzalloc(sizeof(*iopf), GFP_KERNEL); if (!iopf) @@ -186,18 +178,19 @@ static int iommu_handle_iopf(struct iommu_fault *fault, struct device *dev) */ int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { - struct dev_iommu *param = dev->iommu; + struct iommu_fault_param *fault_param; struct iopf_fault *evt_pending = NULL; - struct iommu_fault_param *fparam; + struct dev_iommu *param = dev->iommu; int ret = 0; - if (!param || !evt) - return -EINVAL; - - /* we only report device fault if there is a handler registered */ mutex_lock(¶m->lock); - fparam = param->fault_param; + fault_param = param->fault_param; + if (!fault_param) { + mutex_unlock(¶m->lock); + return -EINVAL; + } + mutex_lock(&fault_param->lock); if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { evt_pending = kmemdup(evt, sizeof(struct iopf_fault), @@ -206,20 +199,18 @@ int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) ret = -ENOMEM; goto done_unlock; } - mutex_lock(&fparam->lock); - list_add_tail(&evt_pending->list, &fparam->faults); - mutex_unlock(&fparam->lock); + list_add_tail(&evt_pending->list, &fault_param->faults); } - ret = iommu_handle_iopf(&evt->fault, dev); + ret = iommu_handle_iopf(&evt->fault, fault_param); if (ret && evt_pending) { - mutex_lock(&fparam->lock); list_del(&evt_pending->list); - mutex_unlock(&fparam->lock); kfree(evt_pending); } done_unlock: + mutex_unlock(&fault_param->lock); mutex_unlock(¶m->lock); + return ret; } EXPORT_SYMBOL_GPL(iommu_report_device_fault); @@ -232,18 +223,23 @@ int iommu_page_response(struct device *dev, struct iopf_fault *evt; struct iommu_fault_page_request *prm; struct dev_iommu *param = dev->iommu; + struct iommu_fault_param *fault_param; const struct iommu_ops *ops = dev_iommu_ops(dev); bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID; if (!ops->page_response) return -ENODEV; - if (!param || !param->fault_param) + mutex_lock(¶m->lock); + fault_param = param->fault_param; + if (!fault_param) { + mutex_unlock(¶m->lock); return -EINVAL; + } /* Only send response if there is a fault report pending */ - mutex_lock(¶m->fault_param->lock); - if (list_empty(¶m->fault_param->faults)) { + mutex_lock(&fault_param->lock); + if (list_empty(&fault_param->faults)) { dev_warn_ratelimited(dev, "no pending PRQ, drop response\n"); goto done_unlock; } @@ -251,7 +247,7 @@ int iommu_page_response(struct device *dev, * Check if we have a matching page request pending to respond, * otherwise return -EINVAL */ - list_for_each_entry(evt, ¶m->fault_param->faults, list) { + list_for_each_entry(evt, &fault_param->faults, list) { prm = &evt->fault.prm; if (prm->grpid 
!= msg->grpid) continue; @@ -279,7 +275,8 @@ int iommu_page_response(struct device *dev, } done_unlock: - mutex_unlock(¶m->fault_param->lock); + mutex_unlock(&fault_param->lock); + mutex_unlock(¶m->lock); return ret; } EXPORT_SYMBOL_GPL(iommu_page_response); @@ -362,11 +359,13 @@ int iopf_queue_discard_partial(struct iopf_queue *queue) mutex_lock(&queue->lock); list_for_each_entry(iopf_param, &queue->devices, queue_list) { + mutex_lock(&iopf_param->lock); list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { list_del(&iopf->list); kfree(iopf); } + mutex_unlock(&iopf_param->lock); } mutex_unlock(&queue->lock); return 0; From patchwork Wed Dec 20 01:23:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181412 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2352208dyi; Tue, 19 Dec 2023 17:33:13 -0800 (PST) X-Google-Smtp-Source: AGHT+IH39qD+lJOqyDUzD5yhP/NtpUmT37gacEf2YwLXsGQzzWABgadN5wvs+5UhzDOD7MAPKpUh X-Received: by 2002:a17:906:7390:b0:a23:4b31:565d with SMTP id f16-20020a170906739000b00a234b31565dmr1200754ejl.166.1703035993254; Tue, 19 Dec 2023 17:33:13 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703035993; cv=none; d=google.com; s=arc-20160816; b=RGx2lrAz16zyl2pzSmoLBsJeB3ONJt3Be1oSWlkVwmxFgpb8xPSiLd9ueM0Rdw3DYw nSXaJvnK3ap16tDgB0YIdSisKciIkftQBsA2PnBWioB7cClouL4r7hLQ8vTnKnvdVdJS XfELX43GlH+TqWqSJfx20bEDNjQPo+uoNf65n3Wg6J6cDtQp7tqp+8qap3eqZRWnY1jI yEm2RqoDUCcN2G4Z1xZ0IeflCP3XQubshqjUVTxBZ+AEPjhgqua82zYPLcFmjASY4Qkv fTU6b/Cj1ijT9ICBXR+OhqXsTOHFRrrtliCdMKXumPM5upoC0JGZNQkIo7dYZ0KSgksA eAjQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=NTgMwqetbAF+YK2RTiWrjEbBXs7Li8DFSPjoqFtT4z4=; fh=0puWr8qDwzaogZbtOaymnvHkPoKpvOitlnMkTa2cyaw=; b=Bv+lj1Sd9u6LLDl/ye5pu+9McPWAXD4nO6CYJO09wh28rUyOMRsQf2eN9338CfJigN gTJFfAmbQ4WIX+5lorXkEK4qy/fXyPe13QCaahEwcZgigeZNOfhHLW/8mehf+SHBnv/R it4BT2CWIV4bLynZbkfE/agg8p/JmOuJm5PKKXMy7jPt9rGBz2jHAw1OKg9WQs0+WWTx 0l7V0uTvQCX4nupurneSFHukjX7SOHURdvdxaZXXKSv8qz6OylZGClsOCSLbd+eEjb9n bf5jPWGD1jvhGDstVLKfj63lAfT9UAsKlsCyANKAHGcrAUsanUU3MI5265PB+Lp8HweN QhKw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=a4pqoZSa; spf=pass (google.com: domain of linux-kernel+bounces-6246-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-6246-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian, Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu, Jason Gunthorpe
Subject: [PATCH v9 12/14] iommu: Use refcount for fault data access
Date: Wed, 20 Dec 2023 09:23:30 +0800
Message-Id: <20231220012332.168188-13-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785762669640080591 X-GMAIL-MSGID: 1785762669640080591 The per-device fault data structure stores information about faults occurring on a device. Its lifetime spans from IOPF enablement to disablement. Multiple paths, including IOPF reporting, handling, and responding, may access it concurrently. Previously, a mutex protected the fault data from use after free. But this is not performance friendly due to the critical nature of IOPF handling paths. Refine this with a refcount-based approach. The fault data pointer is obtained within an RCU read region with a refcount. The fault data pointer is returned for usage only when the pointer is valid and a refcount is successfully obtained. The fault data is freed with kfree_rcu(), ensuring data is only freed after all RCU critical regions complete. An iopf handling work starts once an iopf group is created. The handling work continues until iommu_page_response() is called to respond to the iopf and the iopf group is freed. During this time, the device fault parameter should always be available. Add a pointer to the device fault parameter in the iopf_group structure and hold the reference until the iopf_group is freed. Make iommu_page_response() static as it is only used in io-pgfault.c. Co-developed-by: Jason Gunthorpe Signed-off-by: Jason Gunthorpe Signed-off-by: Lu Baolu Tested-by: Yan Zhao Reviewed-by: Jason Gunthorpe --- include/linux/iommu.h | 17 +++-- drivers/iommu/io-pgfault.c | 129 +++++++++++++++++++++++-------------- drivers/iommu/iommu-sva.c | 2 +- 3 files changed, 90 insertions(+), 58 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index dbad2cb9eca2..c2416aa79922 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -41,6 +41,7 @@ struct iommu_dirty_ops; struct notifier_block; struct iommu_sva; struct iommu_dma_cookie; +struct iommu_fault_param; #define IOMMU_FAULT_PERM_READ (1 << 0) /* read */ #define IOMMU_FAULT_PERM_WRITE (1 << 1) /* write */ @@ -129,8 +130,9 @@ struct iopf_group { struct iopf_fault last_fault; struct list_head faults; struct work_struct work; - struct device *dev; struct iommu_domain *domain; + /* The device's fault data parameter. 
*/ + struct iommu_fault_param *fault_param; }; /** @@ -602,6 +604,8 @@ struct iommu_device { /** * struct iommu_fault_param - per-device IOMMU fault data * @lock: protect pending faults list + * @users: user counter to manage the lifetime of the data + * @ruc: rcu head for kfree_rcu() * @dev: the device that owns this param * @queue: IOPF queue * @queue_list: index into queue->devices @@ -611,6 +615,8 @@ struct iommu_device { */ struct iommu_fault_param { struct mutex lock; + refcount_t users; + struct rcu_head rcu; struct device *dev; struct iopf_queue *queue; @@ -638,7 +644,7 @@ struct iommu_fault_param { */ struct dev_iommu { struct mutex lock; - struct iommu_fault_param *fault_param; + struct iommu_fault_param __rcu *fault_param; struct iommu_fwspec *fwspec; struct iommu_device *iommu_dev; void *priv; @@ -1466,7 +1472,6 @@ void iopf_queue_free(struct iopf_queue *queue); int iopf_queue_discard_partial(struct iopf_queue *queue); void iopf_free_group(struct iopf_group *group); int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt); -int iommu_page_response(struct device *dev, struct iommu_page_response *msg); int iopf_group_response(struct iopf_group *group, enum iommu_page_response_code status); #else @@ -1511,12 +1516,6 @@ iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) return -ENODEV; } -static inline int -iommu_page_response(struct device *dev, struct iommu_page_response *msg) -{ - return -ENODEV; -} - static inline int iopf_group_response(struct iopf_group *group, enum iommu_page_response_code status) { diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 5aea8402be47..3a907bad2fcb 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -13,6 +13,32 @@ #include "iommu-priv.h" +/* + * Return the fault parameter of a device if it exists. Otherwise, return NULL. + * On a successful return, the caller takes a reference of this parameter and + * should put it after use by calling iopf_put_dev_fault_param(). + */ +static struct iommu_fault_param *iopf_get_dev_fault_param(struct device *dev) +{ + struct dev_iommu *param = dev->iommu; + struct iommu_fault_param *fault_param; + + rcu_read_lock(); + fault_param = rcu_dereference(param->fault_param); + if (fault_param && !refcount_inc_not_zero(&fault_param->users)) + fault_param = NULL; + rcu_read_unlock(); + + return fault_param; +} + +/* Caller must hold a reference of the fault parameter. */ +static void iopf_put_dev_fault_param(struct iommu_fault_param *fault_param) +{ + if (refcount_dec_and_test(&fault_param->users)) + kfree_rcu(fault_param, rcu); +} + void iopf_free_group(struct iopf_group *group) { struct iopf_fault *iopf, *next; @@ -22,6 +48,8 @@ void iopf_free_group(struct iopf_group *group) kfree(iopf); } + /* Pair with iommu_report_device_fault(). 
*/ + iopf_put_dev_fault_param(group->fault_param); kfree(group); } EXPORT_SYMBOL_GPL(iopf_free_group); @@ -135,7 +163,7 @@ static int iommu_handle_iopf(struct iommu_fault *fault, goto cleanup_partial; } - group->dev = dev; + group->fault_param = iopf_param; group->last_fault.fault = *fault; INIT_LIST_HEAD(&group->faults); group->domain = domain; @@ -178,64 +206,61 @@ static int iommu_handle_iopf(struct iommu_fault *fault, */ int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { + bool last_prq = evt->fault.type == IOMMU_FAULT_PAGE_REQ && + (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE); struct iommu_fault_param *fault_param; - struct iopf_fault *evt_pending = NULL; - struct dev_iommu *param = dev->iommu; - int ret = 0; + struct iopf_fault *evt_pending; + int ret; - mutex_lock(¶m->lock); - fault_param = param->fault_param; - if (!fault_param) { - mutex_unlock(¶m->lock); + fault_param = iopf_get_dev_fault_param(dev); + if (!fault_param) return -EINVAL; - } mutex_lock(&fault_param->lock); - if (evt->fault.type == IOMMU_FAULT_PAGE_REQ && - (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { + if (last_prq) { evt_pending = kmemdup(evt, sizeof(struct iopf_fault), GFP_KERNEL); if (!evt_pending) { ret = -ENOMEM; - goto done_unlock; + goto err_unlock; } list_add_tail(&evt_pending->list, &fault_param->faults); } ret = iommu_handle_iopf(&evt->fault, fault_param); - if (ret && evt_pending) { + if (ret) + goto err_free; + + mutex_unlock(&fault_param->lock); + /* The reference count of fault_param is now held by iopf_group. */ + if (!last_prq) + iopf_put_dev_fault_param(fault_param); + + return 0; +err_free: + if (last_prq) { list_del(&evt_pending->list); kfree(evt_pending); } -done_unlock: +err_unlock: mutex_unlock(&fault_param->lock); - mutex_unlock(¶m->lock); + iopf_put_dev_fault_param(fault_param); return ret; } EXPORT_SYMBOL_GPL(iommu_report_device_fault); -int iommu_page_response(struct device *dev, - struct iommu_page_response *msg) +static int iommu_page_response(struct iopf_group *group, + struct iommu_page_response *msg) { bool needs_pasid; int ret = -EINVAL; struct iopf_fault *evt; struct iommu_fault_page_request *prm; - struct dev_iommu *param = dev->iommu; - struct iommu_fault_param *fault_param; + struct device *dev = group->fault_param->dev; const struct iommu_ops *ops = dev_iommu_ops(dev); bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID; - - if (!ops->page_response) - return -ENODEV; - - mutex_lock(¶m->lock); - fault_param = param->fault_param; - if (!fault_param) { - mutex_unlock(¶m->lock); - return -EINVAL; - } + struct iommu_fault_param *fault_param = group->fault_param; /* Only send response if there is a fault report pending */ mutex_lock(&fault_param->lock); @@ -276,10 +301,9 @@ int iommu_page_response(struct device *dev, done_unlock: mutex_unlock(&fault_param->lock); - mutex_unlock(¶m->lock); + return ret; } -EXPORT_SYMBOL_GPL(iommu_page_response); /** * iopf_queue_flush_dev - Ensure that all queued faults have been processed @@ -295,22 +319,20 @@ EXPORT_SYMBOL_GPL(iommu_page_response); */ int iopf_queue_flush_dev(struct device *dev) { - int ret = 0; struct iommu_fault_param *iopf_param; - struct dev_iommu *param = dev->iommu; - if (!param) + /* + * It's a driver bug to be here after iopf_queue_remove_device(). + * Therefore, it's safe to dereference the fault parameter without + * holding the lock. 
+ */ + iopf_param = rcu_dereference_check(dev->iommu->fault_param, true); + if (WARN_ON(!iopf_param)) return -ENODEV; - mutex_lock(¶m->lock); - iopf_param = param->fault_param; - if (iopf_param) - flush_workqueue(iopf_param->queue->wq); - else - ret = -ENODEV; - mutex_unlock(¶m->lock); + flush_workqueue(iopf_param->queue->wq); - return ret; + return 0; } EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); @@ -335,7 +357,7 @@ int iopf_group_response(struct iopf_group *group, (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)) resp.flags = IOMMU_PAGE_RESP_PASID_VALID; - return iommu_page_response(group->dev, &resp); + return iommu_page_response(group, &resp); } EXPORT_SYMBOL_GPL(iopf_group_response); @@ -384,10 +406,15 @@ int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) int ret = 0; struct dev_iommu *param = dev->iommu; struct iommu_fault_param *fault_param; + const struct iommu_ops *ops = dev_iommu_ops(dev); + + if (!ops->page_response) + return -ENODEV; mutex_lock(&queue->lock); mutex_lock(¶m->lock); - if (param->fault_param) { + if (rcu_dereference_check(param->fault_param, + lockdep_is_held(¶m->lock))) { ret = -EBUSY; goto done_unlock; } @@ -402,10 +429,12 @@ int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) INIT_LIST_HEAD(&fault_param->faults); INIT_LIST_HEAD(&fault_param->partial); fault_param->dev = dev; + refcount_set(&fault_param->users, 1); + init_rcu_head(&fault_param->rcu); list_add(&fault_param->queue_list, &queue->devices); fault_param->queue = queue; - param->fault_param = fault_param; + rcu_assign_pointer(param->fault_param, fault_param); done_unlock: mutex_unlock(¶m->lock); @@ -429,10 +458,12 @@ int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) int ret = 0; struct iopf_fault *iopf, *next; struct dev_iommu *param = dev->iommu; - struct iommu_fault_param *fault_param = param->fault_param; + struct iommu_fault_param *fault_param; mutex_lock(&queue->lock); mutex_lock(¶m->lock); + fault_param = rcu_dereference_check(param->fault_param, + lockdep_is_held(¶m->lock)); if (!fault_param) { ret = -ENODEV; goto unlock; @@ -454,8 +485,10 @@ int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) list_for_each_entry_safe(iopf, next, &fault_param->partial, list) kfree(iopf); - param->fault_param = NULL; - kfree(fault_param); + /* dec the ref owned by iopf_queue_add_device() */ + rcu_assign_pointer(param->fault_param, NULL); + if (refcount_dec_and_test(&fault_param->users)) + kfree_rcu(fault_param, rcu); unlock: mutex_unlock(¶m->lock); mutex_unlock(&queue->lock); diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c index 9de878e40413..b51995b4fe90 100644 --- a/drivers/iommu/iommu-sva.c +++ b/drivers/iommu/iommu-sva.c @@ -251,7 +251,7 @@ static void iommu_sva_handle_iopf(struct work_struct *work) static int iommu_sva_iopf_handler(struct iopf_group *group) { - struct iommu_fault_param *fault_param = group->dev->iommu->fault_param; + struct iommu_fault_param *fault_param = group->fault_param; INIT_WORK(&group->work, iommu_sva_handle_iopf); if (!queue_work(fault_param->queue->wq, &group->work)) From patchwork Wed Dec 20 01:23:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181413 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:24d3:b0:fb:cd0c:d3e with SMTP id r19csp2352316dyi; Tue, 19 Dec 2023 17:33:34 -0800 (PST) X-Google-Smtp-Source: 
From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Jason Gunthorpe Subject: [PATCH v9 13/14] iommu: Improve iopf_queue_remove_device() Date: Wed, 20 Dec 2023 09:23:31 +0800 Message-Id:
<20231220012332.168188-14-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com>
Convert iopf_queue_remove_device() to return void instead of an error code, as the return value is never used. This removal helper is designed never to fail, so there is no need for error handling. Ack all outstanding page requests from the device with the response code IOMMU_PAGE_RESP_INVALID, indicating that the device should not attempt any retry. Add comments to this helper explaining the steps involved in removing a device from the iopf queue and disabling its PRI. Suggested-by: Jason Gunthorpe Signed-off-by: Lu Baolu Tested-by: Yan Zhao Reviewed-by: Jason Gunthorpe --- include/linux/iommu.h | 5 ++-- drivers/iommu/intel/iommu.c | 7 +---- drivers/iommu/io-pgfault.c | 59 ++++++++++++++++++++++++------------- 3 files changed, 41 insertions(+), 30 deletions(-)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h index c2416aa79922..d8d173309469 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -1465,7 +1465,7 @@ iommu_sva_domain_alloc(struct device *dev, struct mm_struct *mm) #ifdef CONFIG_IOMMU_IOPF int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev); -int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev); +void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev); int iopf_queue_flush_dev(struct device *dev); struct iopf_queue *iopf_queue_alloc(const char *name); void iopf_queue_free(struct iopf_queue *queue); @@ -1481,10 +1481,9 @@ iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) return -ENODEV; } -static inline int +static inline void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) { - return -ENODEV; } static inline int iopf_queue_flush_dev(struct device *dev)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 29a12f289e2e..a81a2be9b870 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -4455,12 +4455,7 @@ static int intel_iommu_disable_iopf(struct device *dev) */ pci_disable_pri(to_pci_dev(dev)); info->pri_enabled = 0; - - /* - * With PRI disabled and outstanding PRQs drained, removing device - * from iopf queue should never fail. - */ - WARN_ON(iopf_queue_remove_device(iommu->iopf_queue, dev)); + iopf_queue_remove_device(iommu->iopf_queue, dev); return 0; }
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 3a907bad2fcb..3221f6387beb 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -449,42 +449,61 @@ EXPORT_SYMBOL_GPL(iopf_queue_add_device); * @queue: IOPF queue * @dev: device to remove * - * Caller makes sure that no more faults are reported for this device. + * Removing a device from an iopf_queue. It's recommended to follow these + * steps when removing a device: * - * Return: 0 on success and <0 on error. + * - Disable new PRI reception: Turn off PRI generation in the IOMMU hardware + * and flush any hardware page request queues. This should be done before + * calling into this helper.
+ * - Acknowledge all outstanding PRQs to the device: Respond to all outstanding + * page requests with IOMMU_PAGE_RESP_INVALID, indicating the device should + * not retry. This helper function handles this. + * - Disable PRI on the device: After calling this helper, the caller could + * then disable PRI on the device. + * - Tear down the iopf infrastructure: Calling iopf_queue_remove_device() + * essentially disassociates the device. The fault_param might still exist, + * but iommu_page_response() will do nothing. The device fault parameter + * reference count has been properly passed from iommu_report_device_fault() + * to the fault handling work, and will eventually be released after + * iommu_page_response(). */ -int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) +void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) { - int ret = 0; struct iopf_fault *iopf, *next; + struct iommu_page_response resp; struct dev_iommu *param = dev->iommu; struct iommu_fault_param *fault_param; + const struct iommu_ops *ops = dev_iommu_ops(dev); mutex_lock(&queue->lock); mutex_lock(&param->lock); fault_param = rcu_dereference_check(param->fault_param, lockdep_is_held(&param->lock)); - if (!fault_param) { - ret = -ENODEV; - goto unlock; - } - - if (fault_param->queue != queue) { - ret = -EINVAL; - goto unlock; - } - if (!list_empty(&fault_param->faults)) { - ret = -EBUSY; + if (WARN_ON(!fault_param || fault_param->queue != queue)) goto unlock; - } - - list_del(&fault_param->queue_list); - /* Just in case some faults are still stuck */ + mutex_lock(&fault_param->lock); list_for_each_entry_safe(iopf, next, &fault_param->partial, list) kfree(iopf); + list_for_each_entry_safe(iopf, next, &fault_param->faults, list) { + memset(&resp, 0, sizeof(struct iommu_page_response)); + resp.pasid = iopf->fault.prm.pasid; + resp.grpid = iopf->fault.prm.grpid; + resp.code = IOMMU_PAGE_RESP_INVALID; + + if (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID) + resp.flags = IOMMU_PAGE_RESP_PASID_VALID; + + ops->page_response(dev, iopf, &resp); + list_del(&iopf->list); + kfree(iopf); + } + mutex_unlock(&fault_param->lock); + + list_del(&fault_param->queue_list); + /* dec the ref owned by iopf_queue_add_device() */ rcu_assign_pointer(param->fault_param, NULL); if (refcount_dec_and_test(&fault_param->users)) @@ -492,8 +511,6 @@ int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) unlock: mutex_unlock(&param->lock); mutex_unlock(&queue->lock); - - return ret; } EXPORT_SYMBOL_GPL(iopf_queue_remove_device);
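To make the documented removal order concrete, here is a hedged caller-side sketch loosely modeled on the intel_iommu_disable_iopf() hunk above; my_iommu, my_disable_iopf() and the surrounding bookkeeping are assumptions for illustration, not code from this series:

#include <linux/iommu.h>
#include <linux/pci.h>
#include <linux/pci-ats.h>

struct my_iommu {			/* hypothetical driver state */
	struct iopf_queue *iopf_queue;
};

static void my_disable_iopf(struct my_iommu *iommu, struct device *dev)
{
	/* Step 1: stop PRI reception in the IOMMU and drain the hardware
	 * page request queue (hardware specific, not shown here). */

	/* Step 2: ack all outstanding PRQs with IOMMU_PAGE_RESP_INVALID and
	 * detach the device from the queue; cannot fail after this patch. */
	iopf_queue_remove_device(iommu->iopf_queue, dev);

	/* Step 3: only now disable PRI on the endpoint itself. */
	if (dev_is_pci(dev))
		pci_disable_pri(to_pci_dev(dev));
}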
From patchwork Wed Dec 20 01:23:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 181414 From: Lu Baolu To: Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe , Kevin Tian , Jean-Philippe Brucker , Nicolin Chen Cc: Yi Liu , Jacob Pan , Longfang Liu , Yan Zhao , iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu Subject: [PATCH v9 14/14] iommu: Track iopf group instead of last fault Date: Wed, 20 Dec 2023 09:23:32 +0800 Message-Id: <20231220012332.168188-15-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com> References: <20231220012332.168188-1-baolu.lu@linux.intel.com>
Previously, before a group of page faults was passed to the domain's iopf handler, the last page fault of the group was kept in the list of iommu_fault_param::faults. In the page fault response path, the group's last page fault was used to look up the list, and the page faults were responded to the device only if there was a matching fault. That approach is unnecessarily complex and not performance friendly. Put the page fault group itself on the outstanding fault list instead. It can be removed in the page fault response path or in the iopf_queue_remove_device() path. The pending list is protected by iommu_fault_param::lock. To allow checking for the group's presence in the list using list_empty(), the iopf group should be removed from the list with list_del_init(). Signed-off-by: Lu Baolu Tested-by: Yan Zhao Reviewed-by: Jason Gunthorpe --- include/linux/iommu.h | 2 + drivers/iommu/io-pgfault.c | 220 +++++++++++++------------------ 2 files changed, 78 insertions(+), 144 deletions(-)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h index d8d173309469..2f765ae06021 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -129,6 +129,8 @@ struct iopf_fault { struct iopf_group { struct iopf_fault last_fault; struct list_head faults; + /* list node for iommu_fault_param::faults */ + struct list_head pending_node; struct work_struct work; struct iommu_domain *domain; /* The device's fault data parameter. */
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 3221f6387beb..7d11b74e4048 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -78,12 +78,33 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, return domain; } +/* Non-last request of a group. Postpone until the last one.
*/ +static int report_partial_fault(struct iommu_fault_param *fault_param, + struct iommu_fault *fault) +{ + struct iopf_fault *iopf; + + iopf = kzalloc(sizeof(*iopf), GFP_KERNEL); + if (!iopf) + return -ENOMEM; + + iopf->fault = *fault; + + mutex_lock(&fault_param->lock); + list_add(&iopf->list, &fault_param->partial); + mutex_unlock(&fault_param->lock); + + return 0; +} + /** - * iommu_handle_iopf - IO Page Fault handler - * @fault: fault event - * @iopf_param: the fault parameter of the device. + * iommu_report_device_fault() - Report fault event to device driver + * @dev: the device + * @evt: fault event data * - * Add a fault to the device workqueue, to be handled by mm. + * Called by IOMMU drivers when a fault is detected, typically in a threaded IRQ + * handler. When this function fails and the fault is recoverable, it is the + * caller's responsibility to complete the fault. * * This module doesn't handle PCI PASID Stop Marker; IOMMU drivers must discard * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't @@ -118,34 +139,37 @@ static struct iommu_domain *get_domain_for_iopf(struct device *dev, * * Return: 0 on success and <0 on error. */ -static int iommu_handle_iopf(struct iommu_fault *fault, - struct iommu_fault_param *iopf_param) +int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) { - int ret; + struct iommu_fault *fault = &evt->fault; + struct iommu_fault_param *iopf_param; + struct iopf_fault *iopf, *next; + struct iommu_domain *domain; struct iopf_group *group; - struct iommu_domain *domain; - struct iopf_fault *iopf, *next; - struct device *dev = iopf_param->dev; - - lockdep_assert_held(&iopf_param->lock); + int ret; if (fault->type != IOMMU_FAULT_PAGE_REQ) - /* Not a recoverable page fault */ return -EOPNOTSUPP; + iopf_param = iopf_get_dev_fault_param(dev); + if (!iopf_param) + return -ENODEV; + if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { - iopf = kzalloc(sizeof(*iopf), GFP_KERNEL); - if (!iopf) - return -ENOMEM; + ret = report_partial_fault(iopf_param, fault); + iopf_put_dev_fault_param(iopf_param); - iopf->fault = *fault; - - /* Non-last request of a group. Postpone until the last one */ - list_add(&iopf->list, &iopf_param->partial); - - return 0; + return ret; } + /* + * This is the last page fault of a group. Allocate an iopf group and + * pass it to domain's page fault handler. The group holds a reference + * count of the fault parameter. It will be released after response or + * error path of this function. If an error is returned, the caller + * will send a response to the hardware. We need to clean up before + * leaving, otherwise partial faults will be stuck. + */ domain = get_domain_for_iopf(dev, fault); if (!domain) { ret = -EINVAL; @@ -154,11 +178,6 @@ static int iommu_handle_iopf(struct iommu_fault *fault, group = kzalloc(sizeof(*group), GFP_KERNEL); if (!group) { - /* - * The caller will send a response to the hardware. But we do - * need to clean up before leaving, otherwise partial faults - * will be stuck. 
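For context, a hedged sketch of what a domain's iopf_handler is expected to do with the group it receives, mirroring the iommu_sva_iopf_handler() pattern earlier in the series; my_iopf_handler(), my_iopf_work() and the use of system_wq are illustrative assumptions:

#include <linux/container_of.h>
#include <linux/iommu.h>
#include <linux/workqueue.h>

/* The handler owns the group: it must eventually respond to the hardware
 * via iopf_group_response() and release it with iopf_free_group(). */
static void my_iopf_work(struct work_struct *work)
{
	struct iopf_group *group = container_of(work, struct iopf_group, work);
	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;

	/* ... resolve each iopf_fault on group->faults here, setting
	 * status to IOMMU_PAGE_RESP_INVALID on failure ... */

	iopf_group_response(group, status);
	iopf_free_group(group);		/* also drops the fault_param ref */
}

static int my_iopf_handler(struct iopf_group *group)
{
	INIT_WORK(&group->work, my_iopf_work);
	if (!queue_work(system_wq, &group->work))
		return -EBUSY;

	return 0;
}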
- */ ret = -ENOMEM; goto cleanup_partial; } @@ -166,145 +185,45 @@ static int iommu_handle_iopf(struct iommu_fault *fault, group->fault_param = iopf_param; group->last_fault.fault = *fault; INIT_LIST_HEAD(&group->faults); + INIT_LIST_HEAD(&group->pending_node); group->domain = domain; list_add(&group->last_fault.list, &group->faults); /* See if we have partial faults for this group */ + mutex_lock(&iopf_param->lock); list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { if (iopf->fault.prm.grpid == fault->prm.grpid) /* Insert *before* the last fault */ list_move(&iopf->list, &group->faults); } - + list_add(&group->pending_node, &iopf_param->faults); mutex_unlock(&iopf_param->lock); + ret = domain->iopf_handler(group); - mutex_lock(&iopf_param->lock); - if (ret) + if (ret) { + mutex_lock(&iopf_param->lock); + list_del_init(&group->pending_node); + mutex_unlock(&iopf_param->lock); iopf_free_group(group); + } return ret; + cleanup_partial: + mutex_lock(&iopf_param->lock); list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) { if (iopf->fault.prm.grpid == fault->prm.grpid) { list_del(&iopf->list); kfree(iopf); } } - return ret; -} - -/** - * iommu_report_device_fault() - Report fault event to device driver - * @dev: the device - * @evt: fault event data - * - * Called by IOMMU drivers when a fault is detected, typically in a threaded IRQ - * handler. When this function fails and the fault is recoverable, it is the - * caller's responsibility to complete the fault. - * - * Return 0 on success, or an error. - */ -int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt) -{ - bool last_prq = evt->fault.type == IOMMU_FAULT_PAGE_REQ && - (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE); - struct iommu_fault_param *fault_param; - struct iopf_fault *evt_pending; - int ret; - - fault_param = iopf_get_dev_fault_param(dev); - if (!fault_param) - return -EINVAL; - - mutex_lock(&fault_param->lock); - if (last_prq) { - evt_pending = kmemdup(evt, sizeof(struct iopf_fault), - GFP_KERNEL); - if (!evt_pending) { - ret = -ENOMEM; - goto err_unlock; - } - list_add_tail(&evt_pending->list, &fault_param->faults); - } - - ret = iommu_handle_iopf(&evt->fault, fault_param); - if (ret) - goto err_free; - - mutex_unlock(&fault_param->lock); - /* The reference count of fault_param is now held by iopf_group. 
*/ - if (!last_prq) - iopf_put_dev_fault_param(fault_param); - - return 0; -err_free: - if (last_prq) { - list_del(&evt_pending->list); - kfree(evt_pending); - } -err_unlock: - mutex_unlock(&fault_param->lock); - iopf_put_dev_fault_param(fault_param); + mutex_unlock(&iopf_param->lock); + iopf_put_dev_fault_param(iopf_param); return ret; } EXPORT_SYMBOL_GPL(iommu_report_device_fault); -static int iommu_page_response(struct iopf_group *group, - struct iommu_page_response *msg) -{ - bool needs_pasid; - int ret = -EINVAL; - struct iopf_fault *evt; - struct iommu_fault_page_request *prm; - struct device *dev = group->fault_param->dev; - const struct iommu_ops *ops = dev_iommu_ops(dev); - bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID; - struct iommu_fault_param *fault_param = group->fault_param; - - /* Only send response if there is a fault report pending */ - mutex_lock(&fault_param->lock); - if (list_empty(&fault_param->faults)) { - dev_warn_ratelimited(dev, "no pending PRQ, drop response\n"); - goto done_unlock; - } - /* - * Check if we have a matching page request pending to respond, - * otherwise return -EINVAL - */ - list_for_each_entry(evt, &fault_param->faults, list) { - prm = &evt->fault.prm; - if (prm->grpid != msg->grpid) - continue; - - /* - * If the PASID is required, the corresponding request is - * matched using the group ID, the PASID valid bit and the PASID - * value. Otherwise only the group ID matches request and - * response. - */ - needs_pasid = prm->flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID; - if (needs_pasid && (!has_pasid || msg->pasid != prm->pasid)) - continue; - - if (!needs_pasid && has_pasid) { - /* No big deal, just clear it. */ - msg->flags &= ~IOMMU_PAGE_RESP_PASID_VALID; - msg->pasid = 0; - } - - ret = ops->page_response(dev, evt, msg); - list_del(&evt->list); - kfree(evt); - break; - } - -done_unlock: - mutex_unlock(&fault_param->lock); - - return ret; -} - /** * iopf_queue_flush_dev - Ensure that all queued faults have been processed * @dev: the endpoint whose faults need to be flushed. 
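The list_del_init()/list_empty() pairing that the changelog relies on is worth spelling out. The sketch below is generic and hedged: the my_pending_* names are made up, and the locking that iommu_fault_param::lock provides in the real code is omitted.

#include <linux/list.h>
#include <linux/types.h>

struct my_pending {
	struct list_head node;
};

/* list_del_init() leaves the node pointing at itself, so a later
 * list_empty(&node) is a reliable "no longer queued" test; a plain
 * list_del() would leave poisoned pointers and permit no such test. */
static void my_pending_queue(struct list_head *pending, struct my_pending *p)
{
	INIT_LIST_HEAD(&p->node);	/* mirrors INIT_LIST_HEAD(&group->pending_node) */
	list_add(&p->node, pending);
}

static bool my_pending_complete(struct my_pending *p)
{
	if (list_empty(&p->node))
		return false;		/* already completed elsewhere */

	list_del_init(&p->node);
	return true;
}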
@@ -346,18 +265,30 @@ EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); int iopf_group_response(struct iopf_group *group, enum iommu_page_response_code status) { + struct iommu_fault_param *fault_param = group->fault_param; struct iopf_fault *iopf = &group->last_fault; + struct device *dev = group->fault_param->dev; + const struct iommu_ops *ops = dev_iommu_ops(dev); struct iommu_page_response resp = { .pasid = iopf->fault.prm.pasid, .grpid = iopf->fault.prm.grpid, .code = status, }; + int ret = -EINVAL; if ((iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) && (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID)) resp.flags = IOMMU_PAGE_RESP_PASID_VALID; - return iommu_page_response(group, &resp); + /* Only send response if there is a fault report pending */ + mutex_lock(&fault_param->lock); + if (!list_empty(&group->pending_node)) { + ret = ops->page_response(dev, &group->last_fault, &resp); + list_del_init(&group->pending_node); + } + mutex_unlock(&fault_param->lock); + + return ret; } EXPORT_SYMBOL_GPL(iopf_group_response); @@ -470,6 +401,7 @@ EXPORT_SYMBOL_GPL(iopf_queue_add_device); void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) { struct iopf_fault *iopf, *next; + struct iopf_group *group, *temp; struct iommu_page_response resp; struct dev_iommu *param = dev->iommu; struct iommu_fault_param *fault_param; @@ -487,8 +419,9 @@ void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) list_for_each_entry_safe(iopf, next, &fault_param->partial, list) kfree(iopf); - list_for_each_entry_safe(iopf, next, &fault_param->faults, list) { + list_for_each_entry_safe(group, temp, &fault_param->faults, pending_node) { memset(&resp, 0, sizeof(struct iommu_page_response)); + iopf = &group->last_fault; resp.pasid = iopf->fault.prm.pasid; resp.grpid = iopf->fault.prm.grpid; resp.code = IOMMU_PAGE_RESP_INVALID; @@ -497,8 +430,7 @@ void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) resp.flags = IOMMU_PAGE_RESP_PASID_VALID; ops->page_response(dev, iopf, &resp); - list_del(&iopf->list); - kfree(iopf); + list_del_init(&group->pending_node); } mutex_unlock(&fault_param->lock);