Message ID | 20231219175046.2496-2-jszhang@kernel.org |
---|---|
State | New |
Headers |
From: Jisheng Zhang <jszhang@kernel.org> To: Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, Will Deacon <will@kernel.org>, "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>, Andrew Morton <akpm@linux-foundation.org>, Nick Piggin <npiggin@gmail.com>, Peter Zijlstra <peterz@infradead.org> Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 1/4] riscv: tlb: fix __p*d_free_tlb() Date: Wed, 20 Dec 2023 01:50:43 +0800 Message-Id: <20231219175046.2496-2-jszhang@kernel.org> In-Reply-To: <20231219175046.2496-1-jszhang@kernel.org> References: <20231219175046.2496-1-jszhang@kernel.org> |
Series | riscv: support fast gup | |
Commit Message
Jisheng Zhang
Dec. 19, 2023, 5:50 p.m. UTC
If a non-leaf PTE, i.e. a pmd, pud or p4d entry, is modified, an
sfence.vma is required for correctness: an implementation may cache
non-leaf translations in the TLB. Although I have not met such
hardware so far, it is possible in theory.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
arch/riscv/include/asm/pgalloc.h | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
Comments
Hi Jisheng,

On 19/12/2023 18:50, Jisheng Zhang wrote:
> If a non-leaf PTE, i.e. a pmd, pud or p4d entry, is modified, an
> sfence.vma is required for correctness: an implementation may cache
> non-leaf translations in the TLB. Although I have not met such
> hardware so far, it is possible in theory.
>
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
>  arch/riscv/include/asm/pgalloc.h | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
> index d169a4f41a2e..a12fb83fa1f5 100644
> --- a/arch/riscv/include/asm/pgalloc.h
> +++ b/arch/riscv/include/asm/pgalloc.h
> @@ -95,7 +95,13 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
>  	__pud_free(mm, pud);
>  }
>
> -#define __pud_free_tlb(tlb, pud, addr)  pud_free((tlb)->mm, pud)
> +#define __pud_free_tlb(tlb, pud, addr)				\
> +do {								\
> +	if (pgtable_l4_enabled) {				\
> +		pagetable_pud_dtor(virt_to_ptdesc(pud));	\
> +		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \
> +	}							\
> +} while (0)
>
> [rest of the diff snipped; see the full patch below]

The specification indeed states that an sfence.vma must be emitted after a page directory modification. Your change is not enough though, since eventually tlb_flush() is called, and in that function we should add:

	if (tlb->freed_tables)
		tlb_flush_mm();

otherwise we are not guaranteed that a "global" sfence.vma is emitted.

Would you be able to benchmark this change and see the performance impact?

Thanks,

Alex
On 19/12/2023 18:50, Jisheng Zhang wrote:
> If a non-leaf PTE, i.e. a pmd, pud or p4d entry, is modified, an
> sfence.vma is required for correctness: an implementation may cache
> non-leaf translations in the TLB. Although I have not met such
> hardware so far, it is possible in theory.

And since this is a fix, it would be worth trying to add a Fixes tag here. Not easy, I agree, because it fixes several commits (I have 07037db5d479f, e8a62cc26ddf5, d10efa21a9374 and c5e9b2c2ae822 if you implement tlb_flush() as I suggested).

So I would add the latest commit as the Fixes commit (which would be c5e9b2c2ae822), and then I'd send a patch to stable for each commit with the right Fixes tag... @Conor: let me know if you have a simpler idea or if this is wrong.

Thanks,

Alex

> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
>  arch/riscv/include/asm/pgalloc.h | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> [diff snipped; see the full patch below]
On Thu, 04 Jan 2024 02:55:40 PST (-0800), alex@ghiti.fr wrote:
> On 19/12/2023 18:50, Jisheng Zhang wrote:
>> If a non-leaf PTE, i.e. a pmd, pud or p4d entry, is modified, an
>> sfence.vma is required for correctness: an implementation may cache
>> non-leaf translations in the TLB. Although I have not met such
>> hardware so far, it is possible in theory.
>
> And since this is a fix, it would be worth trying to add a Fixes tag
> here. Not easy, I agree, because it fixes several commits (I have
> 07037db5d479f, e8a62cc26ddf5, d10efa21a9374 and c5e9b2c2ae822 if you
> implement tlb_flush() as I suggested).
>
> So I would add the latest commit as the Fixes commit (which would be
> c5e9b2c2ae822), and then I'd send a patch to stable for each commit
> with the right Fixes tag... @Conor: let me know if you have a simpler
> idea or if this is wrong.

I just went with

    Fixes: c5e9b2c2ae82 ("riscv: Improve tlb_flush()")
    Cc: stable@vger.kernel.org

hopefully that's fine. It's still getting tested; it's batched up with some other stuff and I managed to find a bad merge, so it might take a bit...

> [diff snipped; see the full patch below]
Hi Jisheng,

On 31/12/2023 07:21, Alexandre Ghiti wrote:
> Hi Jisheng,
>
> On 19/12/2023 18:50, Jisheng Zhang wrote:
>> If a non-leaf PTE, i.e. a pmd, pud or p4d entry, is modified, an
>> sfence.vma is required for correctness: an implementation may cache
>> non-leaf translations in the TLB. Although I have not met such
>> hardware so far, it is possible in theory.
>>
>> [diff snipped; see the full patch below]
>
> The specification indeed states that an sfence.vma must be emitted
> after a page directory modification. Your change is not enough though,
> since eventually tlb_flush() is called, and in that function we should
> add:
>
>	if (tlb->freed_tables)
>		tlb_flush_mm();
>
> otherwise we are not guaranteed that a "global" sfence.vma is emitted.
>
> Would you be able to benchmark this change and see the performance
> impact?

I sent a patch for that here:
https://lore.kernel.org/linux-riscv/20240128120405.25876-1-alexghiti@rivosinc.com/

You can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,

Alex
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index d169a4f41a2e..a12fb83fa1f5 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -95,7 +95,13 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 	__pud_free(mm, pud);
 }
 
-#define __pud_free_tlb(tlb, pud, addr)  pud_free((tlb)->mm, pud)
+#define __pud_free_tlb(tlb, pud, addr)				\
+do {								\
+	if (pgtable_l4_enabled) {				\
+		pagetable_pud_dtor(virt_to_ptdesc(pud));	\
+		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \
+	}							\
+} while (0)
 
 #define p4d_alloc_one p4d_alloc_one
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -124,7 +130,11 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 	__p4d_free(mm, p4d);
 }
 
-#define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
+#define __p4d_free_tlb(tlb, p4d, addr)				\
+do {								\
+	if (pgtable_l5_enabled)					\
+		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(p4d)); \
+} while (0)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
 static inline void sync_kernel_mappings(pgd_t *pgd)
@@ -149,7 +159,11 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #ifndef __PAGETABLE_PMD_FOLDED
 
-#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+#define __pmd_free_tlb(tlb, pmd, addr)				\
+do {								\
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));		\
+	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
+} while (0)
 
 #endif /* __PAGETABLE_PMD_FOLDED */