Message ID | 20230922115804.2043771-3-ryan.roberts@arm.com |
---|---|
State | New |
Series | Fix set_huge_pte_at() panic on arm64 |
Commit Message
Ryan Roberts
Sept. 22, 2023, 11:58 a.m. UTC
When called with a swap entry that does not embed a PFN (e.g.
PTE_MARKER_POISONED or PTE_MARKER_UFFD_WP), the previous implementation
of set_huge_pte_at() would either cause a BUG() to fire (if
CONFIG_DEBUG_VM is enabled) or cause a dereference of an invalid address
and subsequent panic.
arm64's huge pte implementation supports multiple huge page sizes, some
of which are implemented in the page table with multiple contiguous
entries. So set_huge_pte_at() needs to work out how big the logical pte
is, so that it can also work out how many physical ptes (or pmds) need
to be written. It previously did this by grabbing the folio out of the
pte and querying its size.
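As a reference point, that size-to-layout mapping looks roughly like the following sketch, modelled on arm64's num_contig_ptes() (simplified for illustration; the real function also gates the PUD_SIZE case on pud_sect_supported()):

	/*
	 * Sketch of arm64's num_contig_ptes(): translate a logical huge
	 * page size into the number of hardware entries to write and the
	 * stride between them. Simplified; not the verbatim kernel code.
	 */
	static int num_contig_ptes(unsigned long size, size_t *pgsize)
	{
		int contig_ptes = 0;

		*pgsize = size;

		switch (size) {
		case PUD_SIZE:		/* one block entry, e.g. 1G with 4K pages */
		case PMD_SIZE:		/* one block entry, e.g. 2M with 4K pages */
			contig_ptes = 1;
			break;
		case CONT_PMD_SIZE:	/* e.g. 32M: CONT_PMDS contiguous PMDs */
			*pgsize = PMD_SIZE;
			contig_ptes = CONT_PMDS;
			break;
		case CONT_PTE_SIZE:	/* e.g. 64K: CONT_PTES contiguous PTEs */
			*pgsize = PAGE_SIZE;
			contig_ptes = CONT_PTES;
			break;
		}

		return contig_ptes;
	}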
However, there are cases when the pte being set is actually a swap
entry. But this also used to work fine, because for huge ptes, we only
ever saw migration entries and hwpoison entries. And both of these types
of swap entries have a PFN embedded, so the code would grab that and
everything still worked out.
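Sketched out, the old non-present path (removed by this patch, see the diff below) derived the size like this, which is only valid while the swap entry's offset field really holds a PFN:

	/*
	 * Old derivation: recover the folio via the PFN embedded in the
	 * swap entry, then size the mapping from the folio. For
	 * PTE_MARKER_* entries the offset field holds marker bits, not a
	 * PFN, so pfn_to_page() yields a bogus pointer.
	 */
	swp_entry_t entry = pte_to_swp_entry(pte);
	struct folio *folio = page_folio(pfn_to_page(swp_offset_pfn(entry)));

	ncontig = num_contig_ptes(folio_size(folio), &pgsize);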
But over time, more calls to set_huge_pte_at() have been added that set
swap entry types that do not embed a PFN. And this causes the code to go
bang. The triggering case is for the uffd poison test, commit
99aa77215ad0 ("selftests/mm: add uffd unit test for UFFDIO_POISON"),
which causes a PTE_MARKER_POISONED swap entry to be set, courtesy of
commit 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for
hugetlbfs") - added in v6.5-rc7. Although review shows that there are
other call sites that set PTE_MARKER_UFFD_WP (which also has no PFN),
these don't trigger on arm64 because arm64 doesn't support UFFD WP.
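For context, pte markers are swap entries whose offset field carries flag bits rather than a PFN; a simplified sketch of how they are defined in include/linux/swapops.h:

	/*
	 * Simplified sketch of pte markers: the swap "offset" is a bitmap
	 * of marker flags, so there is no page frame number to recover.
	 */
	#define PTE_MARKER_UFFD_WP	BIT(0)
	#define PTE_MARKER_POISONED	BIT(1)

	static inline swp_entry_t make_pte_marker_entry(pte_marker marker)
	{
		return swp_entry(SWP_PTE_MARKER, marker);
	}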
Arguably, the root cause is really due to commit 18f3962953e4 ("mm:
hugetlb: kill set_huge_swap_pte_at()"), which aimed to simplify the
interface to the core code by removing set_huge_swap_pte_at() (which
took a page size parameter) and replacing it with calls to
set_huge_pte_at() where the size was inferred from the folio, as
described above. While that commit didn't break anything at the time, it
did break the interface because it couldn't handle swap entries without
PFNs. And since then new callers have come along which rely on this
working. But given the brokenness is only observable after commit
8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"),
that one gets the Fixes tag.
Now that we have modified the set_huge_pte_at() interface to pass the
huge page size in the previous patch, we can trivially fix this issue.
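As an illustration of the fixed interface, the UFFDIO_POISON path in core mm now ends up making roughly this call (a sketch based on mm/hugetlb.c after this series; variable names abbreviated):

	/*
	 * Illustrative caller: the hstate's logical huge page size is
	 * passed explicitly, so the arm64 implementation no longer has to
	 * infer it from a (possibly PFN-less) swap pte.
	 */
	pte_t marker = make_pte_marker(PTE_MARKER_POISONED);

	set_huge_pte_at(dst_mm, dst_addr, dst_pte, marker, huge_page_size(h));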
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Fixes: 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs")
Cc: <stable@vger.kernel.org> # 6.5+
---
arch/arm64/mm/hugetlbpage.c | 17 +++--------------
1 file changed, 3 insertions(+), 14 deletions(-)
Comments
On Fri, Sep 22, 2023 at 12:58:04PM +0100, Ryan Roberts wrote:
> When called with a swap entry that does not embed a PFN (e.g.
> PTE_MARKER_POISONED or PTE_MARKER_UFFD_WP), the previous implementation
> of set_huge_pte_at() would either cause a BUG() to fire (if
> CONFIG_DEBUG_VM is enabled) or cause a dereference of an invalid address
> and subsequent panic.
[...]
> @@ -257,13 +250,10 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  	unsigned long pfn, dpfn;
>  	pgprot_t hugeprot;
>  
> -	if (!pte_present(pte)) {
> -		struct folio *folio;
> -
> -		folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
> -		ncontig = num_contig_ptes(folio_size(folio), &pgsize);
> +	ncontig = num_contig_ptes(sz, &pgsize);
>  
> -		for (i = 0; i < ncontig; i++, ptep++)
> +	if (!pte_present(pte)) {
> +		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
>  			set_pte_at(mm, addr, ptep, pte);

Our set_pte_at() doesn't use 'addr' for anything and the old code didn't
even bother to increment it here! I'm fine adding that, but it feels
unrelated to the issue which this patch is actually fixing.

Either way:

Acked-by: Will Deacon <will@kernel.org>

Will
On 22/09/2023 17:14, Will Deacon wrote:
> On Fri, Sep 22, 2023 at 12:58:04PM +0100, Ryan Roberts wrote:
[...]
>> -		for (i = 0; i < ncontig; i++, ptep++)
>> +		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
>>  			set_pte_at(mm, addr, ptep, pte);
>
> Our set_pte_at() doesn't use 'addr' for anything and the old code didn't
> even bother to increment it here! I'm fine adding that, but it feels
> unrelated to the issue which this patch is actually fixing.

True. I agree it's not strictly necessary and will presumably be optimized
out. But I'm not sure that having knowledge that the implementation doesn't
use it is a good reason not to call the interface correctly. I'll leave it
as I've done it if that's ok.

> Either way:
>
> Acked-by: Will Deacon <will@kernel.org>

Thanks!

> Will
Looks correct to me - thanks for the fix!

Reviewed-by: Axel Rasmussen <axelrasmussen@google.com>

On Fri, Sep 22, 2023 at 9:41 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> On 22/09/2023 17:14, Will Deacon wrote:
[...]
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index a7f8c8db3425..13fd592228b1 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -241,13 +241,6 @@ static void clear_flush(struct mm_struct *mm,
 	flush_tlb_range(&vma, saddr, addr);
 }
 
-static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
-{
-	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
-
-	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
-}
-
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		     pte_t *ptep, pte_t pte, unsigned long sz)
 {
@@ -257,13 +250,10 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	unsigned long pfn, dpfn;
 	pgprot_t hugeprot;
 
-	if (!pte_present(pte)) {
-		struct folio *folio;
-
-		folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
-		ncontig = num_contig_ptes(folio_size(folio), &pgsize);
+	ncontig = num_contig_ptes(sz, &pgsize);
 
-		for (i = 0; i < ncontig; i++, ptep++)
+	if (!pte_present(pte)) {
+		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
 			set_pte_at(mm, addr, ptep, pte);
 		return;
 	}
@@ -273,7 +263,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		return;
 	}
 
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	pfn = pte_pfn(pte);
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);