From patchwork Mon Feb 12 01:22:20 2024
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 199589
From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian,
    Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, Joel Granados,
    iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Lu Baolu, Jason Gunthorpe
Subject: [PATCH v13 09/16] iommu: Make iommu_queue_iopf() more generic
Date: Mon, 12 Feb 2024 09:22:20 +0800
Message-Id: <20240212012227.119381-10-baolu.lu@linux.intel.com>
In-Reply-To: <20240212012227.119381-1-baolu.lu@linux.intel.com>
References: <20240212012227.119381-1-baolu.lu@linux.intel.com>

Make iommu_queue_iopf() more generic by making the iopf_group a minimal
set of iopfs that a domain's iopf handler should handle and respond to.
Add a domain parameter to struct iopf_group so that the handler can
retrieve and use it directly.

Change iommu_queue_iopf() to forward groups of iopfs to the domain's
iopf handler. This is also a necessary step toward decoupling the SVA
iopf handling code from this interface.
Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
Reviewed-by: Jason Gunthorpe
Reviewed-by: Yi Liu
Tested-by: Yan Zhao
Tested-by: Longfang Liu
---
 include/linux/iommu.h      |  4 +--
 drivers/iommu/iommu-sva.h  |  6 ++--
 drivers/iommu/io-pgfault.c | 68 +++++++++++++++++++++++++++++++-------
 drivers/iommu/iommu-sva.c  |  3 +-
 4 files changed, 61 insertions(+), 20 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b352198cb030..b9fdd09d7b0e 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -130,6 +130,7 @@ struct iopf_group {
 	struct list_head faults;
 	struct work_struct work;
 	struct device *dev;
+	struct iommu_domain *domain;
 };
 
 /**
@@ -209,8 +210,7 @@ struct iommu_domain {
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	struct iommu_domain_geometry geometry;
 	struct iommu_dma_cookie *iova_cookie;
-	enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
-						      void *data);
+	int (*iopf_handler)(struct iopf_group *group);
 	void *fault_data;
 	union {
 		struct {
diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h
index de7819c796ce..27c8da115b41 100644
--- a/drivers/iommu/iommu-sva.h
+++ b/drivers/iommu/iommu-sva.h
@@ -22,8 +22,7 @@ int iopf_queue_flush_dev(struct device *dev);
 struct iopf_queue *iopf_queue_alloc(const char *name);
 void iopf_queue_free(struct iopf_queue *queue);
 int iopf_queue_discard_partial(struct iopf_queue *queue);
-enum iommu_page_response_code
-iommu_sva_handle_iopf(struct iommu_fault *fault, void *data);
+int iommu_sva_handle_iopf(struct iopf_group *group);
 #else /* CONFIG_IOMMU_SVA */
 static inline int iommu_queue_iopf(struct iommu_fault *fault,
 				   struct device *dev)
@@ -62,8 +61,7 @@ static inline int iopf_queue_discard_partial(struct iopf_queue *queue)
 	return -ENODEV;
 }
 
-static inline enum iommu_page_response_code
-iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+static inline int iommu_sva_handle_iopf(struct iopf_group *group)
 {
 	return IOMMU_PAGE_RESP_INVALID;
 }
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index c7e6bbed5c05..13cd0929e766 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -13,6 +13,9 @@
 
 #include "iommu-sva.h"
 
+enum iommu_page_response_code
+iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm);
+
 static void iopf_free_group(struct iopf_group *group)
 {
 	struct iopf_fault *iopf, *next;
@@ -45,29 +48,48 @@ static void iopf_handler(struct work_struct *work)
 {
 	struct iopf_fault *iopf;
 	struct iopf_group *group;
-	struct iommu_domain *domain;
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
 	group = container_of(work, struct iopf_group, work);
-	domain = iommu_get_domain_for_dev_pasid(group->dev,
-				group->last_fault.fault.prm.pasid, 0);
-	if (!domain || !domain->iopf_handler)
-		status = IOMMU_PAGE_RESP_INVALID;
-
 	list_for_each_entry(iopf, &group->faults, list) {
 		/*
 		 * For the moment, errors are sticky: don't handle subsequent
 		 * faults in the group if there is an error.
 		 */
-		if (status == IOMMU_PAGE_RESP_SUCCESS)
-			status = domain->iopf_handler(&iopf->fault,
-						      domain->fault_data);
+		if (status != IOMMU_PAGE_RESP_SUCCESS)
+			break;
+
+		status = iommu_sva_handle_mm(&iopf->fault, group->domain->mm);
 	}
 
 	iopf_complete_group(group->dev, &group->last_fault, status);
 	iopf_free_group(group);
 }
 
+static struct iommu_domain *get_domain_for_iopf(struct device *dev,
+						struct iommu_fault *fault)
+{
+	struct iommu_domain *domain;
+
+	if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) {
+		domain = iommu_get_domain_for_dev_pasid(dev, fault->prm.pasid, 0);
+		if (IS_ERR(domain))
+			domain = NULL;
+	} else {
+		domain = iommu_get_domain_for_dev(dev);
+	}
+
+	if (!domain || !domain->iopf_handler) {
+		dev_warn_ratelimited(dev,
+			"iopf (pasid %d) without domain attached or handler installed\n",
+			 fault->prm.pasid);
+
+		return NULL;
+	}
+
+	return domain;
+}
+
 /**
  * iommu_queue_iopf - IO Page Fault handler
  * @fault: fault event
@@ -112,6 +134,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 {
 	int ret;
 	struct iopf_group *group;
+	struct iommu_domain *domain;
 	struct iopf_fault *iopf, *next;
 	struct iommu_fault_param *iopf_param;
 	struct dev_iommu *param = dev->iommu;
@@ -143,6 +166,12 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 		return 0;
 	}
 
+	domain = get_domain_for_iopf(dev, fault);
+	if (!domain) {
+		ret = -EINVAL;
+		goto cleanup_partial;
+	}
+
 	group = kzalloc(sizeof(*group), GFP_KERNEL);
 	if (!group) {
 		/*
@@ -157,8 +186,8 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 	group->dev = dev;
 	group->last_fault.fault = *fault;
 	INIT_LIST_HEAD(&group->faults);
+	group->domain = domain;
 	list_add(&group->last_fault.list, &group->faults);
-	INIT_WORK(&group->work, iopf_handler);
 
 	/* See if we have partial faults for this group */
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
@@ -167,9 +196,13 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 			list_move(&iopf->list, &group->faults);
 	}
 
-	queue_work(iopf_param->queue->wq, &group->work);
-	return 0;
+	mutex_unlock(&iopf_param->lock);
+	ret = domain->iopf_handler(group);
+	mutex_lock(&iopf_param->lock);
+	if (ret)
+		iopf_free_group(group);
+	return ret;
 
 cleanup_partial:
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
 		if (iopf->fault.prm.grpid == fault->prm.grpid) {
@@ -181,6 +214,17 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
 }
 EXPORT_SYMBOL_GPL(iommu_queue_iopf);
 
+int iommu_sva_handle_iopf(struct iopf_group *group)
+{
+	struct iommu_fault_param *fault_param = group->dev->iommu->fault_param;
+
+	INIT_WORK(&group->work, iopf_handler);
+	if (!queue_work(fault_param->queue->wq, &group->work))
+		return -EBUSY;
+
+	return 0;
+}
+
 /**
  * iopf_queue_flush_dev - Ensure that all queued faults have been processed
  * @dev: the endpoint whose faults need to be flushed.
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index c3fc9201d0be..fcae7308fcb7 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -163,11 +163,10 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
  * I/O page fault handler for SVA
  */
 enum iommu_page_response_code
-iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm)
 {
 	vm_fault_t ret;
 	struct vm_area_struct *vma;
-	struct mm_struct *mm = data;
 	unsigned int access_flags = 0;
 	unsigned int fault_flags = FAULT_FLAG_REMOTE;
 	struct iommu_fault_page_request *prm = &fault->prm;
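
To illustrate the reworked contract described in the changelog, here is a minimal,
hypothetical sketch (not part of the patch) of how a fault consumer wires up the new
per-group callback. The function name example_enable_iopf is invented for illustration;
only domain->iopf_handler, domain->mm and iommu_sva_handle_iopf() come from the code
above, and the sketch assumes the caller already owns an SVA-type domain.

static void example_enable_iopf(struct iommu_domain *domain,
				struct mm_struct *mm)
{
	/*
	 * The handler now receives a whole struct iopf_group and returns an
	 * int.  Returning 0 means the handler has taken ownership of the
	 * group (the SVA implementation queues it to the iopf workqueue and
	 * later responds via iopf_complete_group()); a non-zero return makes
	 * iommu_queue_iopf() free the group and propagate the error.
	 */
	domain->iopf_handler = iommu_sva_handle_iopf;
	domain->mm = mm;	/* consumed by the worker as group->domain->mm */
}

A non-SVA handler would follow the same ownership rule: consume the group and eventually
respond for group->last_fault, or return an error so iommu_queue_iopf() cleans up the
group itself.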