From patchwork Wed Dec 7 02:34:30 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 30605
From: Kefeng Wang
To: Andrew Morton
CC: "Matthew Wilcox (Oracle)", Kefeng Wang
Subject: [PATCH 1/2] mm: huge_memory: Convert madvise_free_huge_pmd to use a folio
Date: Wed, 7 Dec 2022 10:34:30 +0800
Message-ID: <20221207023431.151008-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3

Using folios instead of pages removes several calls to compound_head().

Signed-off-by: Kefeng Wang
Reviewed-by: Vishal Moola (Oracle)
---
(A short userspace sketch illustrating the compound_head() saving follows the diff.)

 mm/huge_memory.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index abe6cfd92ffa..6e76c770529b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1603,7 +1603,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 {
         spinlock_t *ptl;
         pmd_t orig_pmd;
-        struct page *page;
+        struct folio *folio;
         struct mm_struct *mm = tlb->mm;
         bool ret = false;
 
@@ -1623,15 +1623,15 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
                 goto out;
         }
 
-        page = pmd_page(orig_pmd);
+        folio = pfn_folio(pmd_pfn(orig_pmd));
         /*
-         * If other processes are mapping this page, we couldn't discard
-         * the page unless they all do MADV_FREE so let's skip the page.
+         * If other processes are mapping this folio, we couldn't discard
+         * the folio unless they all do MADV_FREE so let's skip the folio.
          */
-        if (total_mapcount(page) != 1)
+        if (folio_mapcount(folio) != 1)
                 goto out;
 
-        if (!trylock_page(page))
+        if (!folio_trylock(folio))
                 goto out;
 
         /*
@@ -1639,17 +1639,17 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
          * will deactivate only them.
          */
         if (next - addr != HPAGE_PMD_SIZE) {
-                get_page(page);
+                folio_get(folio);
                 spin_unlock(ptl);
-                split_huge_page(page);
-                unlock_page(page);
-                put_page(page);
+                split_folio(folio);
+                folio_unlock(folio);
+                folio_put(folio);
                 goto out_unlocked;
         }
 
-        if (PageDirty(page))
-                ClearPageDirty(page);
-        unlock_page(page);
+        if (folio_test_dirty(folio))
+                folio_clear_dirty(folio);
+        folio_unlock(folio);
 
         if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
                 pmdp_invalidate(vma, addr, pmd);
@@ -1660,7 +1660,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
                 tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
         }
 
-        mark_page_lazyfree(page);
+        mark_page_lazyfree(&folio->page);
         ret = true;
 out:
         spin_unlock(ptl);
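
As an illustration of the compound_head() saving mentioned in the commit message, here is a minimal userspace sketch, not kernel code: the struct layouts, the flag bit and the helper bodies below are simplified stand-ins for the real kernel definitions. A page-based helper such as PageDirty() must first resolve a possibly-tail page to its head page, while a folio is by definition never a tail page, so folio_test_dirty() can read the flags directly.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in: only the fields needed for the illustration. */
struct page {
        unsigned long flags;    /* only meaningful on the head page */
        struct page *head;      /* NULL when this page is itself a head */
};

/* A folio wraps a head page; by construction it never refers to a tail page. */
struct folio {
        struct page page;
};

/* Page-based helper: must resolve the head page on every call. */
static struct page *compound_head(struct page *page)
{
        return page->head ? page->head : page;
}

static bool PageDirty(struct page *page)
{
        return compound_head(page)->flags & 1UL;
}

/* Folio-based helper: the caller already holds the head, no lookup needed. */
static bool folio_test_dirty(struct folio *folio)
{
        return folio->page.flags & 1UL;
}

int main(void)
{
        struct folio folio = { .page = { .flags = 1UL, .head = NULL } };
        struct page tail   = { .flags = 0, .head = &folio.page };

        /* Both report the dirty bit of the head page... */
        printf("PageDirty(&tail)         = %d\n", PageDirty(&tail));
        /* ...but the folio helper skips the compound_head() indirection. */
        printf("folio_test_dirty(&folio) = %d\n", folio_test_dirty(&folio));
        return 0;
}

The same reasoning applies to the other conversions in the patch (folio_mapcount(), folio_trylock(), folio_clear_dirty(), folio_get()/folio_put()): each folio helper operates on something already known to be a head, so the repeated head-page lookups disappear.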