Message ID | 20230918103213.4166210-6-wangkefeng.wang@huawei.com |
---|---|
State | New |
Series | mm: convert numa balancing functions to use a folio |
Commit Message
Kefeng Wang
Sept. 18, 2023, 10:32 a.m. UTC
The new vm_normal_pmd_folio() wrapper is similar to vm_normal_folio(),
which allows callers to completely replace struct page variables with
struct folio variables.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 10 ++++++++++
2 files changed, 12 insertions(+)
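(For context: a minimal sketch of the call-site pattern this wrapper simplifies. The example_* functions below are hypothetical, not code from this series.)

static struct folio *example_lookup_old(struct vm_area_struct *vma,
					unsigned long addr, pmd_t pmd)
{
	/* Old pattern: go through struct page and convert by hand. */
	struct page *page = vm_normal_page_pmd(vma, addr, pmd);

	return page ? page_folio(page) : NULL;
}

static struct folio *example_lookup_new(struct vm_area_struct *vma,
					unsigned long addr, pmd_t pmd)
{
	/* New pattern: the wrapper hides struct page entirely. */
	return vm_normal_pmd_folio(vma, addr, pmd);
}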
Comments
Kefeng Wang <wangkefeng.wang@huawei.com> writes:

> The new vm_normal_pmd_folio() wrapper is similar to vm_normal_folio(),
> which allows callers to completely replace struct page variables with
> struct folio variables.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/mm.h |  2 ++
>  mm/memory.c        | 10 ++++++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 12335de50140..7d05ec047186 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2327,6 +2327,8 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
>  			     pte_t pte);
>  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>  			     pte_t pte);
> +struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma, unsigned long addr,
> +				  pmd_t pmd);
>  struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>  				pmd_t pmd);

Why not follow the naming of the page counterpart (vm_normal_page_pmd())
and call it vm_normal_folio_pmd()?

--
Best Regards,
Huang, Ying

> diff --git a/mm/memory.c b/mm/memory.c
> index ce3efe7255d2..d4296ee72730 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -689,6 +689,16 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>  out:
>  	return pfn_to_page(pfn);
>  }
> +
> +struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma, unsigned long addr,
> +				  pmd_t pmd)
> +{
> +	struct page *page = vm_normal_page_pmd(vma, addr, pmd);
> +
> +	if (page)
> +		return page_folio(page);
> +	return NULL;
> +}
>  #endif
>
>  static void restore_exclusive_pte(struct vm_area_struct *vma,
On 2023/9/20 11:12, Huang, Ying wrote:
> Kefeng Wang <wangkefeng.wang@huawei.com> writes:
>
>> The new vm_normal_pmd_folio() wrapper is similar to vm_normal_folio(),
>> which allows callers to completely replace struct page variables with
>> struct folio variables.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>  include/linux/mm.h |  2 ++
>>  mm/memory.c        | 10 ++++++++++
>>  2 files changed, 12 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 12335de50140..7d05ec047186 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -2327,6 +2327,8 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
>>  			     pte_t pte);
>>  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>>  			     pte_t pte);
>> +struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma, unsigned long addr,
>> +				  pmd_t pmd);
>>  struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>  				pmd_t pmd);
>
> Why not follow the naming of the page counterpart (vm_normal_page_pmd())
> and call it vm_normal_folio_pmd()?

Personally, X_pmd_folio reads as "get the folio from a pmd", while
X_folio_pmd looks like "return the PMD of a folio". But I can switch to
vm_normal_folio_pmd() for consistency, thanks.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 12335de50140..7d05ec047186 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2327,6 +2327,8 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
+struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma, unsigned long addr,
+				  pmd_t pmd);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd);
diff --git a/mm/memory.c b/mm/memory.c
index ce3efe7255d2..d4296ee72730 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -689,6 +689,16 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 out:
 	return pfn_to_page(pfn);
 }
+
+struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma, unsigned long addr,
+				  pmd_t pmd)
+{
+	struct page *page = vm_normal_page_pmd(vma, addr, pmd);
+
+	if (page)
+		return page_folio(page);
+	return NULL;
+}
 #endif
 
 static void restore_exclusive_pte(struct vm_area_struct *vma,
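(Usage sketch in the spirit of the NUMA-balancing conversions this series targets. Hypothetical, simplified call site; example_numa_node_of() and its logic are illustrative, not code from the follow-up patches.)

static int example_numa_node_of(struct vm_area_struct *vma,
				unsigned long haddr, pmd_t pmd)
{
	struct folio *folio = vm_normal_pmd_folio(vma, haddr, pmd);

	/* NULL covers the huge zero page and special mappings. */
	if (!folio)
		return NUMA_NO_NODE;

	/* Folio helpers replace the old page-based calls directly. */
	return folio_nid(folio);
}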