Message ID | 20230626171430.3167004-2-ryan.roberts@arm.com |
---|---|
State | New |
Headers |
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>, "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, Yin Fengwei <fengwei.yin@intel.com>, David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Geert Uytterhoeven <geert@linux-m68k.org>, Christian Borntraeger <borntraeger@linux.ibm.com>, Sven Schnelle <svens@linux.ibm.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-s390@vger.kernel.org
Subject: [PATCH v1 01/10] mm: Expose clear_huge_page() unconditionally
Date: Mon, 26 Jun 2023 18:14:21 +0100
Message-Id: <20230626171430.3167004-2-ryan.roberts@arm.com>
In-Reply-To: <20230626171430.3167004-1-ryan.roberts@arm.com>
References: <20230626171430.3167004-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1 |
Series | variable-order, large folios for anonymous memory |
Commit Message
Ryan Roberts
June 26, 2023, 5:14 p.m. UTC
In preparation for extending vma_alloc_zeroed_movable_folio() to
allocate an arbitrary order folio, expose clear_huge_page()
unconditionally, so that it can be used to zero the allocated folio in
the generic implementation of vma_alloc_zeroed_movable_folio().
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/mm.h | 3 ++-
mm/memory.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
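For context, the generic implementation of vma_alloc_zeroed_movable_folio() is the intended consumer of this change. As a rough illustration only, this is what that generic helper could look like once it gains gfp and order parameters; the extra parameters come from the follow-up patch discussed in the thread below and are not part of this diff:

/* Sketch only: the gfp/order parameters are assumptions taken from the
 * follow-up patch in this series, not from this diff. */
static inline
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
		unsigned long vaddr, gfp_t gfp, int order)
{
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma,
				vaddr, false);
	if (folio)
		/* With order == 0 this clears a single page, so the same
		 * call covers the existing base-page case. */
		clear_huge_page(&folio->page, vaddr, 1U << order);

	return folio;
}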
Comments
On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> In preparation for extending vma_alloc_zeroed_movable_folio() to
> allocate a arbitrary order folio, expose clear_huge_page()
> unconditionally, so that it can be used to zero the allocated folio in
> the generic implementation of vma_alloc_zeroed_movable_folio().
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/mm.h | 3 ++-
>  mm/memory.c        | 2 +-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 7f1741bd870a..7e3bf45e6491 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3684,10 +3684,11 @@ enum mf_action_page_type {
>   */
>  extern const struct attribute_group memory_failure_attr_group;
>
> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>  extern void clear_huge_page(struct page *page,
>                              unsigned long addr_hint,
>                              unsigned int pages_per_huge_page);
> +
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)

We might not want to depend on THP eventually. Right now, we still
have to, unless splitting is optional, which seems to contradict
06/10. (deferred_split_folio() is a nop without THP.)
On 27/06/2023 02:55, Yu Zhao wrote:
> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> In preparation for extending vma_alloc_zeroed_movable_folio() to
>> allocate a arbitrary order folio, expose clear_huge_page()
>> unconditionally, so that it can be used to zero the allocated folio in
>> the generic implementation of vma_alloc_zeroed_movable_folio().
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  include/linux/mm.h | 3 ++-
>>  mm/memory.c        | 2 +-
>>  2 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 7f1741bd870a..7e3bf45e6491 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -3684,10 +3684,11 @@ enum mf_action_page_type {
>>   */
>>  extern const struct attribute_group memory_failure_attr_group;
>>
>> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>>  extern void clear_huge_page(struct page *page,
>>                              unsigned long addr_hint,
>>                              unsigned int pages_per_huge_page);
>> +
>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>
> We might not want to depend on THP eventually. Right now, we still
> have to, unless splitting is optional, which seems to contradict
> 06/10. (deferred_split_folio() is a nop without THP.)

Yes, I agree - for large anon folios to work, we depend on THP. But I don't
think that helps us here.

In the next patch, I give vma_alloc_zeroed_movable_folio() an extra `order`
parameter. So the generic/default version of the function now needs a way to
clear a compound page.

I guess I could do something like:

static inline
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
				unsigned long vaddr, gfp_t gfp, int order)
{
	struct folio *folio;

	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp,
				order, vma, vaddr, false);
	if (folio) {
#ifdef CONFIG_LARGE_FOLIO
		clear_huge_page(&folio->page, vaddr, 1U << order);
#else
		BUG_ON(order != 0);
		clear_user_highpage(&folio->page, vaddr);
#endif
	}

	return folio;
}

But that's pretty messy and there's no reason why other users might come along
that pass order != 0 and will be surprised by the BUG_ON.
On Tue, Jun 27, 2023 at 1:21 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 27/06/2023 02:55, Yu Zhao wrote:
> > On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >> [...]
> >
> > We might not want to depend on THP eventually. Right now, we still
> > have to, unless splitting is optional, which seems to contradict
> > 06/10. (deferred_split_folio() is a nop without THP.)
>
> Yes, I agree - for large anon folios to work, we depend on THP. But I don't
> think that helps us here.
>
> In the next patch, I give vma_alloc_zeroed_movable_folio() an extra `order`
> parameter. So the generic/default version of the function now needs a way to
> clear a compound page.
>
> I guess I could do something like:
>
> static inline
> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
> 				unsigned long vaddr, gfp_t gfp, int order)
> {
> 	struct folio *folio;
>
> 	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp,
> 				order, vma, vaddr, false);
> 	if (folio) {
> #ifdef CONFIG_LARGE_FOLIO
> 		clear_huge_page(&folio->page, vaddr, 1U << order);
> #else
> 		BUG_ON(order != 0);
> 		clear_user_highpage(&folio->page, vaddr);
> #endif
> 	}
>
> 	return folio;
> }
>
> But that's pretty messy and there's no reason why other users might come along
> that pass order != 0 and will be surprised by the BUG_ON.

#ifdef CONFIG_LARGE_ANON_FOLIO // depends on CONFIG_TRANSPARENT_HUGE_PAGE
struct folio *alloc_anon_folio(struct vm_area_struct *vma, unsigned
long vaddr, int order)
{
// how do_huge_pmd_anonymous_page() allocs and clears
vma_alloc_folio(..., *true*);
}
#else
#define alloc_anon_folio(vma, addr, order)
vma_alloc_zeroed_movable_folio(vma, addr)
#endif
On 27/06/2023 09:29, Yu Zhao wrote:
> On Tue, Jun 27, 2023 at 1:21 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>> On 27/06/2023 02:55, Yu Zhao wrote:
>>> We might not want to depend on THP eventually. Right now, we still
>>> have to, unless splitting is optional, which seems to contradict
>>> 06/10. (deferred_split_folio() is a nop without THP.)
>>
>> [...]
>>
>> But that's pretty messy and there's no reason why other users might come along
>> that pass order != 0 and will be surprised by the BUG_ON.
>
> #ifdef CONFIG_LARGE_ANON_FOLIO // depends on CONFIG_TRANSPARENT_HUGE_PAGE
> struct folio *alloc_anon_folio(struct vm_area_struct *vma, unsigned
> long vaddr, int order)
> {
> // how do_huge_pmd_anonymous_page() allocs and clears
> vma_alloc_folio(..., *true*);

This controls the mem allocation policy (see mempolicy.c::vma_alloc_folio()) not
clearing. Clearing is done in __do_huge_pmd_anonymous_page():

	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);

> }
> #else
> #define alloc_anon_folio(vma, addr, order)
> vma_alloc_zeroed_movable_folio(vma, addr)
> #endif

Sorry I don't get this at all... If you are suggesting to bypass
vma_alloc_zeroed_movable_folio() entirely for the LARGE_ANON_FOLIO case, I
don't think that works because the arch code adds its own gfp flags there. For
example, arm64 adds __GFP_ZEROTAGS for VM_MTE VMAs.

Perhaps we can do away with an arch-owned vma_alloc_zeroed_movable_folio() and
replace it with a new arch_get_zeroed_movable_gfp_flags() then
alloc_anon_folio() add in those flags?

But I still think the cleanest, simplest change is just to unconditionally
expose clear_huge_page() as I've done it.
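As an illustrative aside, here is a rough sketch of the arch hook idea floated in the message above. arch_get_zeroed_movable_gfp_flags() is a hypothetical name taken from this email, not an existing kernel API, and the arm64 variant shown is an assumption based on the VM_MTE / __GFP_ZEROTAGS behaviour mentioned here:

/* Hypothetical hook (name from the discussion above, not an existing API). */
#ifndef arch_get_zeroed_movable_gfp_flags
static inline gfp_t arch_get_zeroed_movable_gfp_flags(struct vm_area_struct *vma)
{
	return 0;
}
#endif

/* arm64 could then supply its extra flag instead of overriding the whole
 * allocation helper (assumption based on the VM_MTE case mentioned above). */
static inline gfp_t arm64_get_zeroed_movable_gfp_flags(struct vm_area_struct *vma)
{
	return (vma->vm_flags & VM_MTE) ? __GFP_ZEROTAGS : 0;
}

/* A generic caller would then OR the arch flags into its gfp mask:
 *	gfp |= arch_get_zeroed_movable_gfp_flags(vma);
 */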
On Tue, Jun 27, 2023 at 3:41 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 27/06/2023 09:29, Yu Zhao wrote:
> > On Tue, Jun 27, 2023 at 1:21 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >> [...]
> >
> > #ifdef CONFIG_LARGE_ANON_FOLIO // depends on CONFIG_TRANSPARENT_HUGE_PAGE
> > struct folio *alloc_anon_folio(struct vm_area_struct *vma, unsigned
> > long vaddr, int order)
> > {
> > // how do_huge_pmd_anonymous_page() allocs and clears
> > vma_alloc_folio(..., *true*);
>
> This controls the mem allocation policy (see mempolicy.c::vma_alloc_folio()) not
> clearing. Clearing is done in __do_huge_pmd_anonymous_page():
>
> 	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);

Sorry for rushing this previously. This is what I meant. The #ifdef
makes it safe to use clear_huge_page() without 01/10. I highlighted
the last parameter to vma_alloc_folio() only because it's different
from what you chose (not implying it clears the folio).

> > }
> > #else
> > #define alloc_anon_folio(vma, addr, order)
> > vma_alloc_zeroed_movable_folio(vma, addr)
> > #endif
>
> Sorry I don't get this at all... If you are suggesting to bypass
> vma_alloc_zeroed_movable_folio() entirely for the LARGE_ANON_FOLIO case

Correct.

> I don't
> think that works because the arch code adds its own gfp flags there. For
> example, arm64 adds __GFP_ZEROTAGS for VM_MTE VMAs.

I think it's the opposite: it should be safer to reuse the THP code because
1. It's an existing case that has been working for PMD_ORDER folios
mapped by PTEs, and it's an arch-independent API which would be easier
to review.
2. Use vma_alloc_zeroed_movable_folio() for large folios is a *new*
case. It's an arch-*dependent* API which I have no idea what VM_MTE
does (should do) to large folios and don't plan to answer that for
now.

> Perhaps we can do away with an arch-owned vma_alloc_zeroed_movable_folio() and
> replace it with a new arch_get_zeroed_movable_gfp_flags() then
> alloc_anon_folio() add in those flags?
>
> But I still think the cleanest, simplest change is just to unconditionally
> expose clear_huge_page() as I've done it.

The fundamental choice there as I see it is to whether the first step
of large anon folios should lean toward the THP code base or the base
page code base (I'm a big fan of the answer "Neither -- we should
create something entirely new instead"). My POV is that the THP code
base would allow us to move faster, since it's proven to work for a
very similar case (PMD_ORDER folios mapped by PTEs).
On 27/06/2023 19:26, Yu Zhao wrote:
> On Tue, Jun 27, 2023 at 3:41 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>> On 27/06/2023 09:29, Yu Zhao wrote:
>>> [...]
>>
>> I don't
>> think that works because the arch code adds its own gfp flags there. For
>> example, arm64 adds __GFP_ZEROTAGS for VM_MTE VMAs.
>
> I think it's the opposite: it should be safer to reuse the THP code because
> 1. It's an existing case that has been working for PMD_ORDER folios
> mapped by PTEs, and it's an arch-independent API which would be easier
> to review.
> 2. Use vma_alloc_zeroed_movable_folio() for large folios is a *new*
> case. It's an arch-*dependent* API which I have no idea what VM_MTE
> does (should do) to large folios and don't plan to answer that for
> now.

I've done some archaeology on this now, and convinced myself that your
suggestion is a good one - sorry for doubting it! If you are interested here
are the details:

Only arm64 and ia64 do something non-standard in
vma_alloc_zeroed_movable_folio(). ia64 flushes the dcache for the folio - but
given it does not support THP this is not a problem for the THP path. arm64
adds the __GFP_ZEROTAGS flag, which means that the MTE tags will be zeroed at
the same time as the page is zeroed. This is a perf optimization - if it's not
performed then it will be done at set_pte_at(), which is how this works for
the THP path.

So on that basis, I agree we can use your proposed alloc_anon_folio()
approach. arm64 will lose the MTE optimization but that can be added back
later if needed.

So no need to unconditionally expose clear_huge_page() and no need to modify
all the arch vma_alloc_zeroed_movable_folio() implementations.

Thanks,
Ryan

>> Perhaps we can do away with an arch-owned vma_alloc_zeroed_movable_folio() and
>> replace it with a new arch_get_zeroed_movable_gfp_flags() then
>> alloc_anon_folio() add in those flags?
>>
>> But I still think the cleanest, simplest change is just to unconditionally
>> expose clear_huge_page() as I've done it.
>
> The fundamental choice there as I see it is to whether the first step
> of large anon folios should lean toward the THP code base or the base
> page code base (I'm a big fan of the answer "Neither -- we should
> create something entirely new instead"). My POV is that the THP code
> base would allow us to move faster, since it's proven to work for a
> very similar case (PMD_ORDER folios mapped by PTEs).
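For readers following along, a hedged sketch of the direction agreed at the end of the thread is shown below. CONFIG_LARGE_ANON_FOLIO and alloc_anon_folio() are names used in this discussion rather than an upstream API, and GFP_TRANSHUGE stands in for whatever gfp mask the series finally uses:

#ifdef CONFIG_LARGE_ANON_FOLIO	/* would depend on CONFIG_TRANSPARENT_HUGEPAGE */
static struct folio *alloc_anon_folio(struct vm_area_struct *vma,
				      unsigned long vaddr, int order)
{
	struct folio *folio;

	/* Allocate the way the THP path does: 'true' selects the THP
	 * allocation policy in vma_alloc_folio(). GFP_TRANSHUGE is a
	 * stand-in for the series' final gfp choice. */
	folio = vma_alloc_folio(GFP_TRANSHUGE, order, vma, vaddr, true);
	if (folio)
		/* Clear as __do_huge_pmd_anonymous_page() does; safe here
		 * because this config depends on THP, so clear_huge_page()
		 * is available without patch 01/10. */
		clear_huge_page(&folio->page, vaddr, 1U << order);

	return folio;
}
#else
/* Order-0 fallback keeps the existing arch-aware zeroing helper. */
#define alloc_anon_folio(vma, vaddr, order) \
	vma_alloc_zeroed_movable_folio(vma, vaddr)
#endif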
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7f1741bd870a..7e3bf45e6491 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3684,10 +3684,11 @@ enum mf_action_page_type {
  */
 extern const struct attribute_group memory_failure_attr_group;
 
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
 			    unsigned int pages_per_huge_page);
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint,
 			  struct vm_area_struct *vma);
diff --git a/mm/memory.c b/mm/memory.c
index fb30f7523550..3d4ea668c4d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5741,7 +5741,6 @@ void __might_fault(const char *file, int line)
 EXPORT_SYMBOL(__might_fault);
 #endif
 
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 /*
  * Process all subpages of the specified huge page with the specified
  * operation. The target subpage will be processed last to keep its
@@ -5839,6 +5838,7 @@ void clear_huge_page(struct page *page,
 	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
 }
 
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 				   unsigned long addr,
 				   struct vm_area_struct *vma,