From patchwork Tue Dec 19 17:50:44 2023
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 181182
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Will Deacon, "Aneesh Kumar K . V", Andrew Morton, Nick Piggin, Peter Zijlstra
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/4] riscv: tlb: convert __p*d_free_tlb() to inline functions
Date: Wed, 20 Dec 2023 01:50:44 +0800
Message-Id: <20231219175046.2496-3-jszhang@kernel.org>
In-Reply-To: <20231219175046.2496-1-jszhang@kernel.org>
References: <20231219175046.2496-1-jszhang@kernel.org>

This is to prepare for enabling MMU_GATHER_RCU_TABLE_FREE. No functionality changes.
Signed-off-by: Jisheng Zhang
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/include/asm/pgalloc.h | 54 +++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 22 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index a12fb83fa1f5..3c5e3bd15f46 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -95,13 +95,16 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 		__pud_free(mm, pud);
 }
 
-#define __pud_free_tlb(tlb, pud, addr)					\
-do {									\
-	if (pgtable_l4_enabled) {					\
-		pagetable_pud_dtor(virt_to_ptdesc(pud));		\
-		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud));	\
-	}								\
-} while (0)
+static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
+				  unsigned long addr)
+{
+	if (pgtable_l4_enabled) {
+		struct ptdesc *ptdesc = virt_to_ptdesc(pud);
+
+		pagetable_pud_dtor(ptdesc);
+		tlb_remove_page_ptdesc(tlb, ptdesc);
+	}
+}
 
 #define p4d_alloc_one p4d_alloc_one
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -130,11 +133,12 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 		__p4d_free(mm, p4d);
 }
 
-#define __p4d_free_tlb(tlb, p4d, addr)					\
-do {									\
-	if (pgtable_l5_enabled)						\
-		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(p4d));	\
-} while (0)
+static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
+				  unsigned long addr)
+{
+	if (pgtable_l5_enabled)
+		tlb_remove_page_ptdesc(tlb, virt_to_ptdesc(p4d));
+}
 #endif /* __PAGETABLE_PMD_FOLDED */
 
 static inline void sync_kernel_mappings(pgd_t *pgd)
@@ -159,19 +163,25 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #ifndef __PAGETABLE_PMD_FOLDED
 
-#define __pmd_free_tlb(tlb, pmd, addr)				\
-do {								\
-	pagetable_pmd_dtor(virt_to_ptdesc(pmd));		\
-	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
-} while (0)
+static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
+				  unsigned long addr)
+{
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
+
+	pagetable_pmd_dtor(ptdesc);
+	tlb_remove_page_ptdesc(tlb, ptdesc);
+}
 
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-#define __pte_free_tlb(tlb, pte, buf)			\
-do {							\
-	pagetable_pte_dtor(page_ptdesc(pte));		\
-	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
-} while (0)
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
+				  unsigned long addr)
+{
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	pagetable_pte_dtor(ptdesc);
+	tlb_remove_page_ptdesc(tlb, ptdesc);
+}
 
 #endif /* CONFIG_MMU */
 
 #endif /* _ASM_RISCV_PGALLOC_H */