From patchwork Mon Oct 16 12:52:53 2023
X-Patchwork-Submitter: Justin He
X-Patchwork-Id: 153372
From: Jia He
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, iommu@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, nd@arm.com, Jia He
Subject: [PATCH v3 1/2]
 dma-mapping: export dma_addressing_limited()
Date: Mon, 16 Oct 2023 12:52:53 +0000
Message-Id: <20231016125254.1875-2-justin.he@arm.com>
In-Reply-To: <20231016125254.1875-1-justin.he@arm.com>
References: <20231016125254.1875-1-justin.he@arm.com>

This is a preparatory patch: move dma_addressing_limited() out of line and
export it, rather than introducing a new low-level helper.

Suggested-by: Christoph Hellwig
Signed-off-by: Jia He
---
 include/linux/dma-mapping.h | 19 +++++--------------
 kernel/dma/mapping.c        | 15 +++++++++++++++
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f0ccca16a0ac..4a658de44ee9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
+bool dma_addressing_limited(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 size_t dma_opt_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
@@ -264,6 +265,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
 {
 	return 0;
 }
+static inline bool dma_addressing_limited(struct device *dev)
+{
+	return false;
+}
 static inline size_t dma_max_mapping_size(struct device *dev)
 {
 	return 0;
@@ -465,20 +470,6 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the devices DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-		dma_get_required_mask(dev);
-}
-
 static inline unsigned int dma_get_max_seg_size(struct device *dev)
 {
 	if (dev->dma_parms && dev->dma_parms->max_segment_size)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index e323ca48f7f2..5bfe782f9a7f 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -793,6 +793,21 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_set_coherent_mask);
 
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the devices DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+		dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL(dma_addressing_limited);
+
 size_t dma_max_mapping_size(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);