From patchwork Sat Oct 28 10:20:58 2023
X-Patchwork-Submitter: Justin He
X-Patchwork-Id: 159252
From: Jia He
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, iommu@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, nd@arm.com, Jia He
Subject: [PATCH v4 1/2]
dma-mapping: move dma_addressing_limited() out of line
Date: Sat, 28 Oct 2023 10:20:58 +0000
Message-Id: <20231028102059.66891-2-justin.he@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231028102059.66891-1-justin.he@arm.com>
References: <20231028102059.66891-1-justin.he@arm.com>

Move dma_addressing_limited() out of line. This is a preliminary step that
avoids introducing a new publicly accessible low-level helper when checking
whether all system RAM is mapped within the DMA mapping range.
Suggested-by: Christoph Hellwig
Signed-off-by: Jia He
---
 include/linux/dma-mapping.h | 19 +++++--------------
 kernel/dma/mapping.c        | 15 +++++++++++++++
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f0ccca16a0ac..4a658de44ee9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
+bool dma_addressing_limited(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 size_t dma_opt_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
@@ -264,6 +265,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
 {
 	return 0;
 }
+static inline bool dma_addressing_limited(struct device *dev)
+{
+	return false;
+}
 static inline size_t dma_max_mapping_size(struct device *dev)
 {
 	return 0;
@@ -465,20 +470,6 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the devices DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-		dma_get_required_mask(dev);
-}
-
 static inline unsigned int dma_get_max_seg_size(struct device *dev)
 {
 	if (dev->dma_parms && dev->dma_parms->max_segment_size)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index e323ca48f7f2..5bfe782f9a7f 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -793,6 +793,21 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_set_coherent_mask);
 
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the devices DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+		dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL(dma_addressing_limited);
+
 size_t dma_max_mapping_size(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);