From patchwork Thu Feb 8 08:23:01 2024
X-Patchwork-Submitter: Yi Liu <yi.l.liu@intel.com>
X-Patchwork-Id: 198197
From: Yi Liu <yi.l.liu@intel.com>
To: joro@8bytes.org, jgg@nvidia.com, kevin.tian@intel.com, baolu.lu@linux.intel.com
Cc: alex.williamson@redhat.com, robin.murphy@arm.com, eric.auger@redhat.com,
    nicolinc@nvidia.com, kvm@vger.kernel.org, chao.p.peng@linux.intel.com,
    yi.l.liu@intel.com, yi.y.sun@linux.intel.com, iommu@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    zhenzhong.duan@intel.com, joao.m.martins@oracle.com
Subject: [PATCH rc 2/8] iommu/vt-d: Add __iommu_flush_iotlb_psi()
Date: Thu, 8 Feb 2024 00:23:01 -0800
Message-Id: <20240208082307.15759-3-yi.l.liu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240208082307.15759-1-yi.l.liu@intel.com>
References: <20240208082307.15759-1-yi.l.liu@intel.com>

Add __iommu_flush_iotlb_psi() to do the PSI (page-selective-within-domain)
IOTLB flush with a DID passed in by the caller rather than calculated within
the helper. This is useful when flushing the cache for a parent domain, which
reuses the DIDs of its nested domains.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
 drivers/iommu/intel/iommu.c | 79 +++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 35 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e393c62776f3..eef6a187b651 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1368,6 +1368,47 @@ static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
+static void __iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
+				    unsigned long pfn, unsigned int pages,
+				    int ih)
+{
+	unsigned int aligned_pages = __roundup_pow_of_two(pages);
+	unsigned int mask = ilog2(aligned_pages);
+	uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
+	unsigned long bitmask = aligned_pages - 1;
+
+	/*
+	 * PSI masks the low order bits of the base address. If the
+	 * address isn't aligned to the mask, then compute a mask value
+	 * needed to ensure the target range is flushed.
+	 */
+	if (unlikely(bitmask & pfn)) {
+		unsigned long end_pfn = pfn + pages - 1, shared_bits;
+
+		/*
+		 * Since end_pfn <= pfn + bitmask, the only way bits
+		 * higher than bitmask can differ in pfn and end_pfn is
+		 * by carrying. This means after masking out bitmask,
+		 * high bits starting with the first set bit in
+		 * shared_bits are all equal in both pfn and end_pfn.
+		 */
+		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
+		mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
+	}
+
+	/*
+	 * Fallback to domain selective flush if no PSI support or
+	 * the size is too big.
+	 */
+	if (!cap_pgsel_inv(iommu->cap) ||
+	    mask > cap_max_amask_val(iommu->cap))
+		iommu->flush.flush_iotlb(iommu, did, 0, 0,
+					 DMA_TLB_DSI_FLUSH);
+	else
+		iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
+					 DMA_TLB_PSI_FLUSH);
+}
+
 static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 				  struct dmar_domain *domain,
 				  unsigned long pfn, unsigned int pages,
@@ -1384,42 +1425,10 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 	if (ih)
 		ih = 1 << 6;
 
-	if (domain->use_first_level) {
+	if (domain->use_first_level)
 		domain_flush_pasid_iotlb(iommu, domain, addr, pages, ih);
-	} else {
-		unsigned long bitmask = aligned_pages - 1;
-
-		/*
-		 * PSI masks the low order bits of the base address. If the
-		 * address isn't aligned to the mask, then compute a mask value
-		 * needed to ensure the target range is flushed.
-		 */
-		if (unlikely(bitmask & pfn)) {
-			unsigned long end_pfn = pfn + pages - 1, shared_bits;
-
-			/*
-			 * Since end_pfn <= pfn + bitmask, the only way bits
-			 * higher than bitmask can differ in pfn and end_pfn is
-			 * by carrying. This means after masking out bitmask,
-			 * high bits starting with the first set bit in
-			 * shared_bits are all equal in both pfn and end_pfn.
-			 */
-			shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
-			mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
-		}
-
-		/*
-		 * Fallback to domain selective flush if no PSI support or
-		 * the size is too big.
-		 */
-		if (!cap_pgsel_inv(iommu->cap) ||
-		    mask > cap_max_amask_val(iommu->cap))
-			iommu->flush.flush_iotlb(iommu, did, 0, 0,
-						 DMA_TLB_DSI_FLUSH);
-		else
-			iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
-						 DMA_TLB_PSI_FLUSH);
-	}
+	else
+		__iommu_flush_iotlb_psi(iommu, did, pfn, pages, ih);
 
 	/*
 	 * In caching mode, changes of pages from non-present to present require
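[Not part of the patch] For anyone following the mask computation being moved
into __iommu_flush_iotlb_psi() above, below is a minimal stand-alone sketch of
that calculation in plain user-space C. The helper names (psi_mask, lsb_index,
roundup_pow2) and the sample pfn/pages values are made up for illustration, and
the kernel-only helpers (__roundup_pow_of_two, ilog2, __ffs) are replaced with
portable equivalents; it only demonstrates how the address-mask value is derived.

/* Sketch: derive the PSI address-mask value for a pfn/pages range. */
#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

/* Round up to the next power of two (stand-in for __roundup_pow_of_two). */
static unsigned long roundup_pow2(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Index of the least significant set bit (stand-in for __ffs). */
static unsigned int lsb_index(unsigned long x)
{
	return (unsigned int)__builtin_ctzl(x);
}

/* Mirror of the mask computation in __iommu_flush_iotlb_psi(). */
static unsigned int psi_mask(unsigned long pfn, unsigned int pages)
{
	unsigned long aligned_pages = roundup_pow2(pages);
	/* For a power of two, lsb_index() equals ilog2(). */
	unsigned int mask = lsb_index(aligned_pages);
	unsigned long bitmask = aligned_pages - 1;

	if (bitmask & pfn) {
		unsigned long end_pfn = pfn + pages - 1;
		unsigned long shared_bits = ~(pfn ^ end_pfn) & ~bitmask;

		mask = shared_bits ? lsb_index(shared_bits) : (unsigned int)BITS_PER_LONG;
	}
	return mask;
}

int main(void)
{
	/* Unaligned: pages 0x1001-0x1002 straddle a 2-page boundary. */
	printf("pfn=0x1001 pages=2 -> mask=%u\n", psi_mask(0x1001, 2));
	/* Aligned: 8 pages at a pfn aligned to 8 keep the natural mask. */
	printf("pfn=0x1000 pages=8 -> mask=%u\n", psi_mask(0x1000, 8));
	return 0;
}

In the first call the two requested pages cross a 2-page boundary, so the mask
grows to 2 and a 4-page naturally aligned region (0x1000-0x1003) covers the
range; the second, aligned call keeps mask 3 (8 pages). The hardware then
flushes 1 << mask pages at the mask-aligned base, or the driver falls back to a
domain-selective flush when the mask exceeds what the capability register allows.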