Message ID | 20230703135330.1865927-2-ryan.roberts@arm.com |
---|---|
State | New |
Headers |
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, Yin Fengwei <fengwei.yin@intel.com>, David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Anshuman Khandual <anshuman.khandual@arm.com>, Yang Shi <shy828301@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 1/5] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Date: Mon, 3 Jul 2023 14:53:26 +0100
Message-Id: <20230703135330.1865927-2-ryan.roberts@arm.com>
In-Reply-To: <20230703135330.1865927-1-ryan.roberts@arm.com>
References: <20230703135330.1865927-1-ryan.roberts@arm.com> |
Series |
variable-order, large folios for anonymous memory
Commit Message
Ryan Roberts
July 3, 2023, 1:53 p.m. UTC
In preparation for FLEXIBLE_THP support, improve
folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
passed to it. In this case, all contained pages are accounted using the
"small" pages scheme.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/rmap.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
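
A minimal caller-side sketch (not part of the posted patch) of the call pattern this change enables: map a new, non-pmd-mappable large folio with individual PTEs and let a single folio_add_new_anon_rmap() call account every page via the "small" pages scheme. The helper name, the fixed order-2 allocation and the pre-located PTE pointer are illustrative assumptions; locking, mm counters, error unwinding and MMU cache maintenance are omitted.

/*
 * Sketch only, not from this series: a hypothetical fault-path helper
 * mapping a new order-2 anonymous folio with one PTE per page.
 */
static int map_new_anon_folio_sketch(struct vm_area_struct *vma,
                                     unsigned long addr, pte_t *pte)
{
        struct folio *folio;
        int i, nr;

        folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 2, vma, addr, false);
        if (!folio)
                return -ENOMEM;

        nr = folio_nr_pages(folio);             /* 4 pages for order 2 */
        folio_zero_range(folio, 0, folio_size(folio));
        __folio_mark_uptodate(folio);

        /* One call accounts all nr pages using the "small" pages scheme. */
        folio_add_new_anon_rmap(folio, vma, addr);
        folio_add_lru_vma(folio, vma);

        /* Map each page of the folio with its own PTE. */
        for (i = 0; i < nr; i++) {
                pte_t entry = mk_pte(folio_page(folio, i), vma->vm_page_prot);

                if (vma->vm_flags & VM_WRITE)
                        entry = pte_mkwrite(pte_mkdirty(entry));
                set_pte_at(vma->vm_mm, addr + i * PAGE_SIZE, pte + i, entry);
        }

        return 0;
}

The point of the sketch is the ordering: rmap and LRU bookkeeping happen once per folio, while PTE setup stays per page.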
Comments
On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> In preparation for FLEXIBLE_THP support, improve
> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
> passed to it. In this case, all contained pages are accounted using the
> "small" pages scheme.

Nit: In this case, all *subpages* are accounted using the *order-0
folio* (or base page) scheme.

> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Yu Zhao <yuzhao@google.com>

>  mm/rmap.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1d8369549424..82ef5ba363d1 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>   * This means the inc-and-test can be bypassed.
>   * The folio does not have to be locked.
>   *
> - * If the folio is large, it is accounted as a THP. As the folio
> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>   * is new, it's assumed to be mapped exclusively by a single process.
>   */
>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>                 unsigned long address)
>  {
> -       int nr;
> +       int nr = folio_nr_pages(folio);
> +       int i;
> +       struct page *page;
>
> -       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> +       VM_BUG_ON_VMA(address < vma->vm_start ||
> +                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>         __folio_set_swapbacked(folio);
>
> -       if (likely(!folio_test_pmd_mappable(folio))) {
> +       if (!folio_test_large(folio)) {
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_mapcount, 0);
> -               nr = 1;
> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
> +       } else if (!folio_test_pmd_mappable(folio)) {
> +               /* increment count (starts at 0) */
> +               atomic_set(&folio->_nr_pages_mapped, nr);
> +
> +               page = &folio->page;
> +               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
> +                       /* increment count (starts at -1) */
> +                       atomic_set(&page->_mapcount, 0);
> +                       __page_set_anon_rmap(folio, page, vma, address, 1);
> +               }

Nit: use folio_page(), e.g.,

        } else if (!folio_test_pmd_mappable(folio)) {
                int i;

                for (i = 0; i < nr; i++) {
                        struct page *page = folio_page(folio, i);

                        /* increment count (starts at -1) */
                        atomic_set(&page->_mapcount, 0);
                        __page_set_anon_rmap(folio, page, vma, address + PAGE_SIZE * i, 1);
                }
                /* increment count (starts at 0) */
                atomic_set(&folio->_nr_pages_mapped, nr);
        } else {

>         } else {
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_entire_mapcount, 0);
>                 atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
> -               nr = folio_nr_pages(folio);
>                 __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>         }
>
>         __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
> -       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>  }
On 7/4/2023 3:05 AM, Yu Zhao wrote:
> On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> In preparation for FLEXIBLE_THP support, improve
>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>> passed to it. In this case, all contained pages are accounted using the
>> "small" pages scheme.
>
> Nit: In this case, all *subpages* are accounted using the *order-0
> folio* (or base page) scheme.
Matthew suggested not to use subpage with folio. Using page with folio:
https://lore.kernel.org/linux-mm/Y9qiS%2FIxZOMx62t6@casper.infradead.org/

>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> Reviewed-by: Yu Zhao <yuzhao@google.com>
>
>>  mm/rmap.c | 26 +++++++++++++++++++-------
>>  1 file changed, 19 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1d8369549424..82ef5ba363d1 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>   * This means the inc-and-test can be bypassed.
>>   * The folio does not have to be locked.
>>   *
>> - * If the folio is large, it is accounted as a THP. As the folio
>> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>>   * is new, it's assumed to be mapped exclusively by a single process.
>>   */
>>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>                 unsigned long address)
>>  {
>> -       int nr;
>> +       int nr = folio_nr_pages(folio);
>> +       int i;
>> +       struct page *page;
>>
>> -       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>> +                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>         __folio_set_swapbacked(folio);
>>
>> -       if (likely(!folio_test_pmd_mappable(folio))) {
>> +       if (!folio_test_large(folio)) {
>>                 /* increment count (starts at -1) */
>>                 atomic_set(&folio->_mapcount, 0);
>> -               nr = 1;
>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>> +       } else if (!folio_test_pmd_mappable(folio)) {
>> +               /* increment count (starts at 0) */
>> +               atomic_set(&folio->_nr_pages_mapped, nr);
>> +
>> +               page = &folio->page;
>> +               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
>> +                       /* increment count (starts at -1) */
>> +                       atomic_set(&page->_mapcount, 0);
>> +                       __page_set_anon_rmap(folio, page, vma, address, 1);
>> +               }
>
> Nit: use folio_page(), e.g.,
>
>         } else if (!folio_test_pmd_mappable(folio)) {
>                 int i;
>
>                 for (i = 0; i < nr; i++) {
>                         struct page *page = folio_page(folio, i);
>
>                         /* increment count (starts at -1) */
>                         atomic_set(&page->_mapcount, 0);
>                         __page_set_anon_rmap(folio, page, vma, address + PAGE_SIZE * i, 1);
>                 }
>                 /* increment count (starts at 0) */
>                 atomic_set(&folio->_nr_pages_mapped, nr);
>         } else {
>
>>         } else {
>>                 /* increment count (starts at -1) */
>>                 atomic_set(&folio->_entire_mapcount, 0);
>>                 atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>> -               nr = folio_nr_pages(folio);
>>                 __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>         }
>>
>>         __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>> -       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>  }
On 7/3/2023 9:53 PM, Ryan Roberts wrote:
> In preparation for FLEXIBLE_THP support, improve
> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
> passed to it. In this case, all contained pages are accounted using the
> "small" pages scheme.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Yin, Fengwei <fengwei.yin@intel.com>

> ---
>  mm/rmap.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1d8369549424..82ef5ba363d1 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>   * This means the inc-and-test can be bypassed.
>   * The folio does not have to be locked.
>   *
> - * If the folio is large, it is accounted as a THP. As the folio
> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>   * is new, it's assumed to be mapped exclusively by a single process.
>   */
>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>                 unsigned long address)
>  {
> -       int nr;
> +       int nr = folio_nr_pages(folio);
> +       int i;
> +       struct page *page;
>
> -       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> +       VM_BUG_ON_VMA(address < vma->vm_start ||
> +                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>         __folio_set_swapbacked(folio);
>
> -       if (likely(!folio_test_pmd_mappable(folio))) {
> +       if (!folio_test_large(folio)) {
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_mapcount, 0);
> -               nr = 1;
> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
> +       } else if (!folio_test_pmd_mappable(folio)) {
> +               /* increment count (starts at 0) */
> +               atomic_set(&folio->_nr_pages_mapped, nr);
> +
> +               page = &folio->page;
> +               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
> +                       /* increment count (starts at -1) */
> +                       atomic_set(&page->_mapcount, 0);
> +                       __page_set_anon_rmap(folio, page, vma, address, 1);
> +               }
>         } else {
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_entire_mapcount, 0);
>                 atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
> -               nr = folio_nr_pages(folio);
>                 __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>         }
>
>         __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
> -       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>  }
>
>  /**
On 04/07/2023 03:13, Yin, Fengwei wrote:
>
>
> On 7/4/2023 3:05 AM, Yu Zhao wrote:
>> On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>
>>> In preparation for FLEXIBLE_THP support, improve
>>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>>> passed to it. In this case, all contained pages are accounted using the
>>> "small" pages scheme.
>>
>> Nit: In this case, all *subpages* are accounted using the *order-0
>> folio* (or base page) scheme.
> Matthew suggested not to use subpage with folio. Using page with folio:
> https://lore.kernel.org/linux-mm/Y9qiS%2FIxZOMx62t6@casper.infradead.org/

OK, I'll change this to "In this case, all contained pages are accounted
using the *order-0 folio* (or base page) scheme."

>
>>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>
>> Reviewed-by: Yu Zhao <yuzhao@google.com>

Thanks!

>>
>>>  mm/rmap.c | 26 +++++++++++++++++++-------
>>>  1 file changed, 19 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 1d8369549424..82ef5ba363d1 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>>   * This means the inc-and-test can be bypassed.
>>>   * The folio does not have to be locked.
>>>   *
>>> - * If the folio is large, it is accounted as a THP. As the folio
>>> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>>>   * is new, it's assumed to be mapped exclusively by a single process.
>>>   */
>>>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>>                 unsigned long address)
>>>  {
>>> -       int nr;
>>> +       int nr = folio_nr_pages(folio);
>>> +       int i;
>>> +       struct page *page;
>>>
>>> -       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>>> +       VM_BUG_ON_VMA(address < vma->vm_start ||
>>> +                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>>         __folio_set_swapbacked(folio);
>>>
>>> -       if (likely(!folio_test_pmd_mappable(folio))) {
>>> +       if (!folio_test_large(folio)) {
>>>                 /* increment count (starts at -1) */
>>>                 atomic_set(&folio->_mapcount, 0);
>>> -               nr = 1;
>>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>> +       } else if (!folio_test_pmd_mappable(folio)) {
>>> +               /* increment count (starts at 0) */
>>> +               atomic_set(&folio->_nr_pages_mapped, nr);
>>> +
>>> +               page = &folio->page;
>>> +               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
>>> +                       /* increment count (starts at -1) */
>>> +                       atomic_set(&page->_mapcount, 0);
>>> +                       __page_set_anon_rmap(folio, page, vma, address, 1);
>>> +               }
>>
>> Nit: use folio_page(), e.g.,

Yep, will change for v3.

>>
>>         } else if (!folio_test_pmd_mappable(folio)) {
>>                 int i;
>>
>>                 for (i = 0; i < nr; i++) {
>>                         struct page *page = folio_page(folio, i);
>>
>>                         /* increment count (starts at -1) */
>>                         atomic_set(&page->_mapcount, 0);
>>                         __page_set_anon_rmap(folio, page, vma, address + PAGE_SIZE * i, 1);
>>                 }
>>                 /* increment count (starts at 0) */
>>                 atomic_set(&folio->_nr_pages_mapped, nr);
>>         } else {
>>
>>>         } else {
>>>                 /* increment count (starts at -1) */
>>>                 atomic_set(&folio->_entire_mapcount, 0);
>>>                 atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>>> -               nr = folio_nr_pages(folio);
>>>                 __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>>> +               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>         }
>>>
>>>         __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>>> -       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>  }
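
For context on the folio_page() nit resolved above, a two-line sketch of the difference, based on the generic definitions of this era (the configuration details are the assumption here). Note that Yu's variant also keeps address constant and derives each page's address as address + PAGE_SIZE * i instead of mutating both iterators in the loop header.

/* What the posted loop effectively does: raw pointer arithmetic. */
static inline struct page *nth_folio_page_by_pointer(struct folio *folio, int i)
{
        return &folio->page + i;
}

static inline struct page *nth_folio_page_by_helper(struct folio *folio, int i)
{
        /*
         * folio_page(folio, i) expands to nth_page(&folio->page, i), which
         * switches to pfn arithmetic on SPARSEMEM-without-VMEMMAP configs,
         * where the struct pages of one folio need not be virtually
         * contiguous; elsewhere it reduces to the same pointer arithmetic.
         */
        return folio_page(folio, i);
}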
diff --git a/mm/rmap.c b/mm/rmap.c
index 1d8369549424..82ef5ba363d1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  * This means the inc-and-test can be bypassed.
  * The folio does not have to be locked.
  *
- * If the folio is large, it is accounted as a THP. As the folio
+ * If the folio is pmd-mappable, it is accounted as a THP. As the folio
  * is new, it's assumed to be mapped exclusively by a single process.
  */
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
                unsigned long address)
 {
-       int nr;
+       int nr = folio_nr_pages(folio);
+       int i;
+       struct page *page;
 
-       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+       VM_BUG_ON_VMA(address < vma->vm_start ||
+                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
        __folio_set_swapbacked(folio);
 
-       if (likely(!folio_test_pmd_mappable(folio))) {
+       if (!folio_test_large(folio)) {
                /* increment count (starts at -1) */
                atomic_set(&folio->_mapcount, 0);
-               nr = 1;
+               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
+       } else if (!folio_test_pmd_mappable(folio)) {
+               /* increment count (starts at 0) */
+               atomic_set(&folio->_nr_pages_mapped, nr);
+
+               page = &folio->page;
+               for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
+                       /* increment count (starts at -1) */
+                       atomic_set(&page->_mapcount, 0);
+                       __page_set_anon_rmap(folio, page, vma, address, 1);
+               }
        } else {
                /* increment count (starts at -1) */
                atomic_set(&folio->_entire_mapcount, 0);
                atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
-               nr = folio_nr_pages(folio);
                __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
+               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
        }
 
        __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
 }
 
 /**
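
Reading the new middle branch above, a quick worked example for a freshly allocated order-2 anonymous folio (nr = 4) mapped at 'address':

/*
 * folio_add_new_anon_rmap() on an order-2 folio (nr = 4) at 'address':
 *
 *   VM_BUG_ON_VMA            requires [address, address + 4 * PAGE_SIZE)
 *                            to fit inside the VMA
 *   folio->_nr_pages_mapped  set to 4 (the field starts at 0)
 *   page[i]->_mapcount       set to 0 for i = 0..3 (each starts at -1,
 *                            so 0 means "mapped once")
 *   __page_set_anon_rmap()   called four times, at address,
 *                            address + PAGE_SIZE, address + 2 * PAGE_SIZE
 *                            and address + 3 * PAGE_SIZE
 *   NR_ANON_MAPPED           increased by 4
 *   NR_ANON_THPS             untouched (only the pmd-mappable branch
 *                            updates it)
 */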