From patchwork Tue Nov 15 21:22:08 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20582
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 01/10] mm: add folio dtor and order setter functions
Date: Tue, 15 Nov 2022 13:22:08 -0800
Message-Id: <20221115212217.19539-2-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Add folio equivalents for set_compound_order() and set_compound_page_dtor().

Also remove extra new-lines introduced by mm/hugetlb: convert
move_hugetlb_state() to folios and mm/hugetlb_cgroup: convert
hugetlb_cgroup_uncharge_page() to folios.

Suggested-by: Mike Kravetz
Suggested-by: Muchun Song
Signed-off-by: Sidhartha Kumar
---
 include/linux/mm.h | 16 ++++++++++++++++
 mm/hugetlb.c       |  4 +---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 982f2607180b..068686110729 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -895,6 +895,13 @@ static inline void set_compound_page_dtor(struct page *page,
         page[1].compound_dtor = compound_dtor;
 }
 
+static inline void folio_set_compound_dtor(struct folio *folio,
+                enum compound_dtor_id compound_dtor)
+{
+        VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
+        folio->_folio_dtor = compound_dtor;
+}
+
 void destroy_large_folio(struct folio *folio);
 
 static inline int head_compound_pincount(struct page *head)
@@ -910,6 +917,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 #endif
 }
 
+static inline void folio_set_compound_order(struct folio *folio,
+                unsigned int order)
+{
+        folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+        folio->_folio_nr_pages = order ? 1U << order : 0;
+#endif
+}
+
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f786993f92d0..1acde3b8251e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1771,7 +1771,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
         hugetlb_vmemmap_optimize(h, &folio->page);
         INIT_LIST_HEAD(&folio->lru);
-        folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+        folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
         hugetlb_set_folio_subpool(folio, NULL);
         set_hugetlb_cgroup(folio, NULL);
         set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2927,7 +2927,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
          * a reservation exists for the allocation.
          */
         page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-
         if (!page) {
                 spin_unlock_irq(&hugetlb_lock);
                 page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -7317,7 +7316,6 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
         int old_nid = folio_nid(old_folio);
         int new_nid = folio_nid(new_folio);
 
-
         folio_set_hugetlb_temporary(old_folio);
         folio_clear_hugetlb_temporary(new_folio);
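For context, the two new helpers are drop-in folio replacements for the page-based
setters used while preparing a compound page. The sketch below is illustrative only
and is not part of the patch; the function name is hypothetical and it assumes the
mm.h helpers introduced above are visible.

/*
 * Illustrative sketch (not part of the patch): a prep path that used to do
 *      set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 *      set_compound_order(page, order);
 * on the head page can now operate on the folio directly.
 */
static void example_prep_folio(struct folio *folio, unsigned int order)
{
        /* Records the destructor in folio->_folio_dtor (page[1].compound_dtor). */
        folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
        /* Records the order; also sets _folio_nr_pages on CONFIG_64BIT. */
        folio_set_compound_order(folio, order);
}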
From patchwork Tue Nov 15 21:22:09 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20574
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 02/10] mm/hugetlb: convert destroy_compound_gigantic_page() to folios
Date: Tue, 15 Nov 2022 13:22:09 -0800
Message-Id: <20221115212217.19539-3-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert page operations within __destroy_compound_gigantic_page() to the
corresponding folio operations.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1acde3b8251e..cf52cd0d571e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1317,42 +1317,39 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
                 nr_nodes--)
 
 /* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_page(struct page *page,
+static void __destroy_compound_gigantic_folio(struct folio *folio,
                                         unsigned int order, bool demote)
 {
         int i;
         int nr_pages = 1 << order;
         struct page *p;
 
-        atomic_set(compound_mapcount_ptr(page), 0);
-        atomic_set(compound_pincount_ptr(page), 0);
+        atomic_set(folio_mapcount_ptr(folio), 0);
+        atomic_set(folio_pincount_ptr(folio), 0);
 
         for (i = 1; i < nr_pages; i++) {
-                p = nth_page(page, i);
+                p = folio_page(folio, i);
                 p->mapping = NULL;
                 clear_compound_head(p);
                 if (!demote)
                         set_page_refcounted(p);
         }
 
-        set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-        page[1].compound_nr = 0;
-#endif
-        __ClearPageHead(page);
+        folio_set_compound_order(folio, 0);
+        folio_clear_head(folio);
 }
 
-static void destroy_compound_hugetlb_page_for_demote(struct page *page,
+static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
                                         unsigned int order)
 {
-        __destroy_compound_gigantic_page(page, order, true);
+        __destroy_compound_gigantic_folio(folio, order, true);
 }
 
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
+static void destroy_compound_gigantic_folio(struct folio *folio,
                                         unsigned int order)
 {
-        __destroy_compound_gigantic_page(page, order, false);
+        __destroy_compound_gigantic_folio(folio, order, false);
 }
 
 static void free_gigantic_page(struct page *page, unsigned int order)
@@ -1421,7 +1418,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
         return NULL;
 }
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
-static inline void destroy_compound_gigantic_page(struct page *page,
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
                                                 unsigned int order) { }
 #endif
 
@@ -1468,8 +1465,8 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
          *
          * For gigantic pages set the destructor to the null dtor.  This
          * destructor will never be called.  Before freeing the gigantic
-         * page destroy_compound_gigantic_page will turn the compound page
-         * into a simple group of pages.  After this the destructor does not
+         * page destroy_compound_gigantic_folio will turn the folio into a
+         * simple group of pages.  After this the destructor does not
          * apply.
          *
          * This handles the case where more than one ref is held when and
@@ -1550,6 +1547,7 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 static void __update_and_free_page(struct hstate *h, struct page *page)
 {
         int i;
+        struct folio *folio = page_folio(page);
         struct page *subpage;
 
         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1578,8 +1576,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
          * Move PageHWPoison flag from head page to the raw error pages,
          * which makes any healthy subpages reusable.
          */
-        if (unlikely(PageHWPoison(page)))
-                hugetlb_clear_page_hwpoison(page);
+        if (unlikely(folio_test_hwpoison(folio)))
+                hugetlb_clear_page_hwpoison(&folio->page);
 
         for (i = 0; i < pages_per_huge_page(h); i++) {
                 subpage = nth_page(page, i);
@@ -1595,7 +1593,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
          */
         if (hstate_is_gigantic(h) ||
             hugetlb_cma_page(page, huge_page_order(h))) {
-                destroy_compound_gigantic_page(page, huge_page_order(h));
+                destroy_compound_gigantic_folio(folio, huge_page_order(h));
                 free_gigantic_page(page, huge_page_order(h));
         } else {
                 __free_pages(page, huge_page_order(h));
@@ -3426,6 +3424,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
         int i, nid = page_to_nid(page);
         struct hstate *target_hstate;
+        struct folio *folio = page_folio(page);
         struct page *subpage;
         int rc = 0;
 
@@ -3444,10 +3443,10 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
         }
 
         /*
-         * Use destroy_compound_hugetlb_page_for_demote for all huge page
+         * Use destroy_compound_hugetlb_folio_for_demote for all huge page
          * sizes as it will not ref count pages.
          */
-        destroy_compound_hugetlb_page_for_demote(page, huge_page_order(h));
+        destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
 
         /*
          * Taking target hstate mutex synchronizes with set_max_huge_pages.
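For context, the call-site shape this conversion enables: a caller that still holds a
struct page derives the folio once with page_folio(), and the destroy path then
addresses tail pages through folio_page(). Illustrative sketch only, not part of the
patch; the function name is made up and it assumes it sits in mm/hugetlb.c next to
the helpers shown above.

/*
 * Illustrative sketch (not part of the patch): mirrors how
 * __update_and_free_page() now feeds a folio into the gigantic destroy path.
 */
static void example_free_gigantic(struct hstate *h, struct page *page)
{
        struct folio *folio = page_folio(page);         /* head page -> folio */

        destroy_compound_gigantic_folio(folio, huge_page_order(h));
        free_gigantic_page(&folio->page, huge_page_order(h));
}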
From patchwork Tue Nov 15 21:22:10 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20573
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 03/10] mm/hugetlb: convert dissolve_free_huge_page() to folios
Date: Tue, 15 Nov 2022 13:22:10 -0800
Message-Id: <20221115212217.19539-4-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Removes compound_head() call by using a folio rather than a head page.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf52cd0d571e..19657f990900 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2116,21 +2116,21 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 int dissolve_free_huge_page(struct page *page)
 {
         int rc = -EBUSY;
+        struct folio *folio = page_folio(page);
 
 retry:
         /* Not to disrupt normal path by vainly holding hugetlb_lock */
-        if (!PageHuge(page))
+        if (!folio_test_hugetlb(folio))
                 return 0;
 
         spin_lock_irq(&hugetlb_lock);
-        if (!PageHuge(page)) {
+        if (!folio_test_hugetlb(folio)) {
                 rc = 0;
                 goto out;
         }
 
-        if (!page_count(page)) {
-                struct page *head = compound_head(page);
-                struct hstate *h = page_hstate(head);
+        if (!folio_ref_count(folio)) {
+                struct hstate *h = folio_hstate(folio);
 
                 if (!available_huge_pages(h))
                         goto out;
@@ -2138,7 +2138,7 @@ int dissolve_free_huge_page(struct page *page)
                  * We should make sure that the page is already on the free list
                  * when it is dissolved.
                  */
-                if (unlikely(!HPageFreed(head))) {
+                if (unlikely(!folio_test_hugetlb_freed(folio))) {
                         spin_unlock_irq(&hugetlb_lock);
                         cond_resched();
 
@@ -2153,7 +2153,7 @@ int dissolve_free_huge_page(struct page *page)
                         goto retry;
                 }
 
-                remove_hugetlb_page(h, head, false);
+                remove_hugetlb_page(h, &folio->page, false);
                 h->max_huge_pages--;
                 spin_unlock_irq(&hugetlb_lock);
 
@@ -2165,12 +2165,12 @@ int dissolve_free_huge_page(struct page *page)
                  * Attempt to allocate vmemmmap here so that we can take
                  * appropriate action on failure.
                  */
-                rc = hugetlb_vmemmap_restore(h, head);
+                rc = hugetlb_vmemmap_restore(h, &folio->page);
                 if (!rc) {
-                        update_and_free_page(h, head, false);
+                        update_and_free_page(h, &folio->page, false);
                 } else {
                         spin_lock_irq(&hugetlb_lock);
-                        add_hugetlb_page(h, head, false);
+                        add_hugetlb_page(h, &folio->page, false);
                         h->max_huge_pages++;
                         spin_unlock_irq(&hugetlb_lock);
                 }
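The net effect is that the PageHuge()/page_count()/compound_head() combination becomes
folio tests on a folio derived once at entry. A minimal sketch of that pattern follows;
it is illustrative only, not part of the patch, and the wrapper name is hypothetical.

/*
 * Illustrative sketch (not part of the patch): the lockless
 * "is this still a free hugetlb page?" check after the conversion.
 */
static bool example_is_free_hugetlb(struct page *page)
{
        struct folio *folio = page_folio(page); /* no compound_head() needed */

        return folio_test_hugetlb(folio) && !folio_ref_count(folio);
}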
From patchwork Tue Nov 15 21:22:11 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20578
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 04/10] mm/hugetlb: convert remove_hugetlb_page() to folios
Date: Tue, 15 Nov 2022 13:22:11 -0800
Message-Id: <20221115212217.19539-5-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Removes page_folio() call by converting callers to directly pass a folio
into __remove_hugetlb_page().

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 19657f990900..7804ba51a7b8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1423,19 +1423,18 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
 #endif
 
 /*
- * Remove hugetlb page from lists, and update dtor so that page appears
+ * Remove hugetlb folio from lists, and update dtor so that the folio appears
  * as just a compound page.
  *
- * A reference is held on the page, except in the case of demote.
+ * A reference is held on the folio, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus,
                                                         bool demote)
 {
-        int nid = page_to_nid(page);
-        struct folio *folio = page_folio(page);
+        int nid = folio_nid(folio);
 
         VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
         VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
@@ -1444,9 +1443,9 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
                 return;
 
-        list_del(&page->lru);
+        list_del(&folio->lru);
 
-        if (HPageFreed(page)) {
+        if (folio_test_hugetlb_freed(folio)) {
                 h->free_huge_pages--;
                 h->free_huge_pages_node[nid]--;
         }
@@ -1476,26 +1475,26 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
          * be turned into a page of smaller size.
          */
         if (!demote)
-                set_page_refcounted(page);
+                folio_ref_unfreeze(folio, 1);
 
         if (hstate_is_gigantic(h))
-                set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+                folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
         else
-                set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+                folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
 
         h->nr_huge_pages--;
         h->nr_huge_pages_node[nid]--;
 }
 
-static void remove_hugetlb_page(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus)
 {
-        __remove_hugetlb_page(h, page, adjust_surplus, false);
+        __remove_hugetlb_folio(h, folio, adjust_surplus, false);
 }
 
-static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
                                                         bool adjust_surplus)
 {
-        __remove_hugetlb_page(h, page, adjust_surplus, true);
+        __remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }
 
 static void add_hugetlb_page(struct hstate *h, struct page *page,
@@ -1630,8 +1629,9 @@ static void free_hpage_workfn(struct work_struct *work)
         /*
          * The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate()
          * is going to trigger because a previous call to
-         * remove_hugetlb_page() will set_compound_page_dtor(page,
-         * NULL_COMPOUND_DTOR), so do not use page_hstate() directly.
+         * remove_hugetlb_folio() will call folio_set_compound_dtor
+         * (folio, NULL_COMPOUND_DTOR), so do not use page_hstate()
+         * directly.
          */
         h = size_to_hstate(page_size(page));
 
@@ -1740,12 +1740,12 @@ void free_huge_page(struct page *page)
                 h->resv_huge_pages++;
 
         if (folio_test_hugetlb_temporary(folio)) {
-                remove_hugetlb_page(h, page, false);
+                remove_hugetlb_folio(h, folio, false);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
                 update_and_free_page(h, page, true);
         } else if (h->surplus_huge_pages_node[nid]) {
                 /* remove the page from active list */
-                remove_hugetlb_page(h, page, true);
+                remove_hugetlb_folio(h, folio, true);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
                 update_and_free_page(h, page, true);
         } else {
@@ -2080,6 +2080,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 {
         int nr_nodes, node;
         struct page *page = NULL;
+        struct folio *folio;
 
         lockdep_assert_held(&hugetlb_lock);
         for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2091,7 +2092,8 @@ static struct page *remove_pool_huge_page(struct hstate *h,
                     !list_empty(&h->hugepage_freelists[node])) {
                         page = list_entry(h->hugepage_freelists[node].next,
                                           struct page, lru);
-                        remove_hugetlb_page(h, page, acct_surplus);
+                        folio = page_folio(page);
+                        remove_hugetlb_folio(h, folio, acct_surplus);
                         break;
                 }
         }
@@ -2153,7 +2155,7 @@ int dissolve_free_huge_page(struct page *page)
                         goto retry;
                 }
 
-                remove_hugetlb_page(h, &folio->page, false);
+                remove_hugetlb_folio(h, folio, false);
                 h->max_huge_pages--;
                 spin_unlock_irq(&hugetlb_lock);
 
@@ -2792,7 +2794,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
          * and enqueue_huge_page() for new_page. The counters will remain
          * stable since this happens under the lock.
          */
-        remove_hugetlb_page(h, old_page, false);
+        remove_hugetlb_folio(h, old_folio, false);
 
         /*
          * Ref count on new page is already zero as it was dropped
@@ -3219,7 +3221,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
                                 goto out;
                         if (PageHighMem(page))
                                 continue;
-                        remove_hugetlb_page(h, page, false);
+                        remove_hugetlb_folio(h, page_folio(page), false);
                         list_add(&page->lru, &page_list);
                 }
         }
@@ -3430,7 +3432,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 
         target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
 
-        remove_hugetlb_page_for_demote(h, page, false);
+        remove_hugetlb_folio_for_demote(h, folio, false);
         spin_unlock_irq(&hugetlb_lock);
 
         rc = hugetlb_vmemmap_restore(h, page);
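Call sites now pair the folio-based removal with the bookkeeping seen in
dissolve_free_huge_page(). A hedged sketch of that call-site shape follows; it is
illustrative only, not part of the patch, and the function name is hypothetical.

/*
 * Illustrative sketch (not part of the patch): unlink one free folio
 * under the lock, mirroring dissolve_free_huge_page().
 */
static void example_dissolve_one(struct hstate *h, struct folio *folio)
{
        lockdep_assert_held(&hugetlb_lock);

        /* Unlinks the folio from its lists and resets its destructor. */
        remove_hugetlb_folio(h, folio, false);
        h->max_huge_pages--;
}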
From patchwork Tue Nov 15 21:22:12 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20583
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 05/10] mm/hugetlb: convert update_and_free_page() to folios
Date: Tue, 15 Nov 2022 13:22:12 -0800
Message-Id: <20221115212217.19539-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Make more progress on converting the free_huge_page() destructor to operate
on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7804ba51a7b8..660ae46e741b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1469,7 +1469,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
          * apply.
          *
          * This handles the case where more than one ref is held when and
-         * after update_and_free_page is called.
+         * after update_and_free_hugetlb_folio is called.
          *
          * In the case of demote we do not ref count the page as it will soon
          * be turned into a page of smaller size.
@@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }
 
 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, so we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
  * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
  * the vmemmap pages.
@@ -1648,11 +1648,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
         flush_work(&free_hpage_work);
 }
 
-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
                                  bool atomic)
 {
-        if (!HPageVmemmapOptimized(page) || !atomic) {
-                __update_and_free_page(h, page);
+        if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+                __update_and_free_page(h, &folio->page);
                 return;
         }
 
@@ -1663,16 +1663,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
          * empty. Otherwise, schedule_work() had been called but the workfn
          * hasn't retrieved the list yet.
          */
-        if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+        if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
                 schedule_work(&free_hpage_work);
 }
 
 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
         struct page *page, *t_page;
+        struct folio *folio;
 
         list_for_each_entry_safe(page, t_page, list, lru) {
-                update_and_free_page(h, page, false);
+                folio = page_folio(page);
+                update_and_free_hugetlb_folio(h, folio, false);
                 cond_resched();
         }
 }
@@ -1742,12 +1744,12 @@ void free_huge_page(struct page *page)
 
         if (folio_test_hugetlb_temporary(folio)) {
                 remove_hugetlb_folio(h, folio, false);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
-                update_and_free_page(h, page, true);
+                update_and_free_hugetlb_folio(h, folio, true);
         } else if (h->surplus_huge_pages_node[nid]) {
                 /* remove the page from active list */
                 remove_hugetlb_folio(h, folio, true);
                 spin_unlock_irqrestore(&hugetlb_lock, flags);
-                update_and_free_page(h, page, true);
+                update_and_free_hugetlb_folio(h, folio, true);
         } else {
                 arch_clear_hugepage_flags(page);
                 enqueue_huge_page(h, page);
@@ -2160,8 +2162,8 @@ int dissolve_free_huge_page(struct page *page)
         spin_unlock_irq(&hugetlb_lock);
 
         /*
-         * Normally update_and_free_page will allocate required vmemmmap
-         * before freeing the page. update_and_free_page will fail to
+         * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
+         * before freeing the page. update_and_free_hugtlb_folio will fail to
          * free the page if it can not allocate required vmemmap. We
         * need to adjust max_huge_pages if the page is not freed.
          * Attempt to allocate vmemmmap here so that we can take
@@ -2169,7 +2171,7 @@ int dissolve_free_huge_page(struct page *page)
          */
         rc = hugetlb_vmemmap_restore(h, &folio->page);
         if (!rc) {
-                update_and_free_page(h, &folio->page, false);
+                update_and_free_hugetlb_folio(h, folio, false);
         } else {
                 spin_lock_irq(&hugetlb_lock);
                 add_hugetlb_page(h, &folio->page, false);
                 h->max_huge_pages++;
                 spin_unlock_irq(&hugetlb_lock);
         }
@@ -2807,7 +2809,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
          * Pages have been replaced, we can safely free the old one.
          */
         spin_unlock_irq(&hugetlb_lock);
-        update_and_free_page(h, old_page, false);
+        update_and_free_hugetlb_folio(h, old_folio, false);
         }
 
         return ret;
 
@@ -2816,7 +2818,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
         spin_unlock_irq(&hugetlb_lock);
         /* Page has a zero ref count, but needs a ref to be freed */
         folio_ref_unfreeze(new_folio, 1);
-        update_and_free_page(h, new_page, false);
+        update_and_free_hugetlb_folio(h, new_folio, false);
 
         return ret;
 }
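The common caller shape after this patch is: take the folio off the lists under
hugetlb_lock, then free it outside the lock. A minimal sketch follows; it is
illustrative only, not part of the patch, and the function name is hypothetical.

/*
 * Illustrative sketch (not part of the patch): remove under the lock,
 * then free the folio outside it.
 */
static void example_remove_and_free(struct hstate *h, struct folio *folio)
{
        spin_lock_irq(&hugetlb_lock);
        remove_hugetlb_folio(h, folio, false);
        spin_unlock_irq(&hugetlb_lock);

        /* atomic == false: may sleep to restore vmemmap before freeing. */
        update_and_free_hugetlb_folio(h, folio, false);
}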
From patchwork Tue Nov 15 21:22:13 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20575
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio()
Date: Tue, 15 Nov 2022 13:22:13 -0800
Message-Id: <20221115212217.19539-7-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert add_hugetlb_page() to take in a folio; also convert hugetlb_cma_page() to take in a folio, renaming it hugetlb_cma_folio().

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 660ae46e741b..7382c162dbcd 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -53,13 +53,13 @@ struct hstate hstates[HUGE_MAX_HSTATE]; #ifdef CONFIG_CMA static struct cma *hugetlb_cma[MAX_NUMNODES]; static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata; -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { - return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page, + return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page, 1 << order); } #else -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { return false; } @@ -1497,17 +1497,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli __remove_hugetlb_folio(h, folio, adjust_surplus, true); } -static void add_hugetlb_page(struct hstate *h, struct page *page, +static void add_hugetlb_folio(struct hstate *h, struct folio *folio, bool adjust_surplus) { int zeroed; - int nid = page_to_nid(page); + int nid = folio_nid(folio); - VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page); + VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio); lockdep_assert_held(&hugetlb_lock); - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&folio->lru); h->nr_huge_pages++; h->nr_huge_pages_node[nid]++; @@ -1516,21 +1516,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, h->surplus_huge_pages_node[nid]++; } - set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - set_page_private(page, 0); + folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR); + folio_change_private(folio, 0); /* * We have to set HPageVmemmapOptimized again as above - * set_page_private(page, 0) cleared it. + * folio_change_private(folio, 0) cleared it.
*/ - SetHPageVmemmapOptimized(page); + folio_set_hugetlb_vmemmap_optimized(folio); /* - * This page is about to be managed by the hugetlb allocator and + * This folio is about to be managed by the hugetlb allocator and * should have no users. Drop our reference, and check for others * just in case. */ - zeroed = put_page_testzero(page); - if (!zeroed) + zeroed = folio_put_testzero(folio); + if (unlikely(!zeroed)) /* * It is VERY unlikely soneone else has taken a ref on * the page. In this case, we simply return as the @@ -1539,8 +1539,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, */ return; - arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + arch_clear_hugepage_flags(&folio->page); + enqueue_huge_page(h, &folio->page); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1566,7 +1566,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_page(h, page, true); + add_hugetlb_folio(h, page_folio(page), true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1591,7 +1591,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * need to be given back to CMA in free_gigantic_page. */ if (hstate_is_gigantic(h) || - hugetlb_cma_page(page, huge_page_order(h))) { + hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h)); } else { @@ -2174,7 +2174,7 @@ int dissolve_free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, false); } else { spin_lock_irq(&hugetlb_lock); - add_hugetlb_page(h, &folio->page, false); + add_hugetlb_folio(h, folio, false); h->max_huge_pages++; spin_unlock_irq(&hugetlb_lock); } @@ -3442,7 +3442,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) /* Allocation of vmemmmap failed, we can not demote page */ spin_lock_irq(&hugetlb_lock); set_page_refcounted(page); - add_hugetlb_page(h, page, false); + add_hugetlb_folio(h, page_folio(page), false); return rc; }
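
The calling convention introduced here repeats throughout the rest of the series: the helper takes a struct folio, a caller that still holds a struct page bridges with page_folio(), and folio code passes &folio->page where an unconverted interface still wants a raw page. The standalone C sketch below is a toy model of that convention only; the stand-in struct page/struct folio definitions are fake, not the kernel's.

#include <stdio.h>

struct page { int nid; };		/* toy stand-in for struct page */
struct folio { struct page page; };	/* head page embedded, as in the kernel */

/* toy page_folio(): valid for the head page only in this model */
static struct folio *page_folio(struct page *page)
{
	return (struct folio *)page;
}

static int folio_nid(struct folio *folio)
{
	return folio->page.nid;
}

/* after the conversion, the helper takes a folio ... */
static void add_hugetlb_folio(struct folio *folio)
{
	printf("adding folio on node %d\n", folio_nid(folio));
}

/* ... and an old page-based interface is still reachable via &folio->page */
static void legacy_page_hook(struct page *page)
{
	printf("legacy hook on node %d\n", page->nid);
}

int main(void)
{
	struct folio f = { .page = { .nid = 0 } };
	struct page *page = &f.page;		/* caller still holding a page */

	add_hugetlb_folio(page_folio(page));	/* bridge at the call site */
	legacy_page_hook(&f.page);		/* raw page for unconverted code */
	return 0;
}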

From patchwork Tue Nov 15 21:22:14 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20577
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 07/10] mm/hugetlb: convert enqueue_huge_page() to folios
Date: Tue, 15 Nov 2022 13:22:14 -0800
Message-Id: <20221115212217.19539-8-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert callers of enqueue_huge_page() to pass in a folio; the function is renamed to enqueue_hugetlb_folio().
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7382c162dbcd..ebb98c1af2fb 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1119,17 +1119,17 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg) return false; } -static void enqueue_huge_page(struct hstate *h, struct page *page) +static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio) { - int nid = page_to_nid(page); + int nid = folio_nid(folio); lockdep_assert_held(&hugetlb_lock); - VM_BUG_ON_PAGE(page_count(page), page); + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio); - list_move(&page->lru, &h->hugepage_freelists[nid]); + list_move(&folio->lru, &h->hugepage_freelists[nid]); h->free_huge_pages++; h->free_huge_pages_node[nid]++; - SetHPageFreed(page); + folio_set_hugetlb_freed(folio); } static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid) @@ -1540,7 +1540,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, return; arch_clear_hugepage_flags(&folio->page); - enqueue_huge_page(h, &folio->page); + enqueue_hugetlb_folio(h, folio); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1752,7 +1752,7 @@ void free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, true); } else { arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, folio); spin_unlock_irqrestore(&hugetlb_lock, flags); } } @@ -2427,7 +2427,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) if ((--needed) < 0) break; /* Add the page to the hugetlb allocator */ - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, page_folio(page)); } free: spin_unlock_irq(&hugetlb_lock); @@ -2793,8 +2793,8 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * Ok, old_page is still a genuine free hugepage. Remove it from * the freelist and decrease the counters. These will be * incremented again when calling __prep_account_new_huge_page() - * and enqueue_huge_page() for new_page. The counters will remain - * stable since this happens under the lock. + * and enqueue_hugetlb_folio() for new_folio. The counters will + * remain stable since this happens under the lock. */ remove_hugetlb_folio(h, old_folio, false); @@ -2803,7 +2803,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * earlier. It can be directly added to the pool free list. */ __prep_account_new_huge_page(h, nid); - enqueue_huge_page(h, new_page); + enqueue_hugetlb_folio(h, new_folio); /* * Pages have been replaced, we can safely free the old one. 
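
enqueue_hugetlb_folio() keeps the shape of the old page version: the folio itself sits on the per-node free list and the per-node counter is bumped, with the node now taken from folio_nid(). The standalone sketch below is a toy model of that bookkeeping; the singly linked free list, MAX_NODES, and the counters are invented for the example and are not the kernel's data structures.

#include <stdio.h>

#define MAX_NODES 2

struct folio { int nid; struct folio *next; };

static struct folio *free_list[MAX_NODES];
static unsigned long free_huge_pages_node[MAX_NODES];

/* toy enqueue: key everything off the folio's node id */
static void enqueue_hugetlb_folio(struct folio *folio)
{
	int nid = folio->nid;

	folio->next = free_list[nid];		/* push onto the per-node list */
	free_list[nid] = folio;
	free_huge_pages_node[nid]++;
}

int main(void)
{
	struct folio a = { .nid = 0 }, b = { .nid = 1 };

	enqueue_hugetlb_folio(&a);
	enqueue_hugetlb_folio(&b);
	printf("node0=%lu node1=%lu\n",
	       free_huge_pages_node[0], free_huge_pages_node[1]);
	return 0;
}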

From patchwork Tue Nov 15 21:22:15 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20579
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 08/10] mm/hugetlb: convert free_gigantic_page() to folios
Date: Tue, 15 Nov 2022 13:22:15 -0800
Message-Id: <20221115212217.19539-9-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert callers of free_gigantic_page() to use folios; the function is then renamed to free_gigantic_folio().

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 31 ++++++++++++++++++------------- 1 file changed, 18 insertions(+), 13 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index ebb98c1af2fb..bc039ff28b8f 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1352,18 +1352,20 @@ static void destroy_compound_gigantic_folio(struct folio *folio, __destroy_compound_gigantic_folio(folio, order, false); } -static void free_gigantic_page(struct page *page, unsigned int order) +static void free_gigantic_folio(struct folio *folio, unsigned int order) { /* * If the page isn't allocated using the cma allocator, * cma_release() returns false. */ #ifdef CONFIG_CMA - if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) + int nid = folio_nid(folio); + + if (cma_release(hugetlb_cma[nid], &folio->page, 1 << order)) return; #endif - free_contig_range(page_to_pfn(page), 1 << order); + free_contig_range(folio_pfn(folio), 1 << order); } #ifdef CONFIG_CONTIG_ALLOC @@ -1417,7 +1419,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, { return NULL; } -static inline void free_gigantic_page(struct page *page, unsigned int order) { } +static inline void free_gigantic_folio(struct folio *folio, + unsigned int order) { } static inline void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order) { } #endif @@ -1556,7 +1559,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * If we don't know which subpages are hwpoisoned, we can't free * the hugepage, so it's leaked intentionally. */ - if (HPageRawHwpUnreliable(page)) + if (folio_test_hugetlb_raw_hwp_unreliable(folio)) return; if (hugetlb_vmemmap_restore(h, page)) { @@ -1566,7 +1569,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page.
*/ - add_hugetlb_folio(h, page_folio(page), true); + add_hugetlb_folio(h, folio, true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1579,7 +1582,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) hugetlb_clear_page_hwpoison(&folio->page); for (i = 0; i < pages_per_huge_page(h); i++) { - subpage = nth_page(page, i); + subpage = folio_page(folio, i); subpage->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | 1 << PG_active | 1 << PG_private | @@ -1588,12 +1591,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page) /* * Non-gigantic pages demoted from CMA allocated gigantic pages - * need to be given back to CMA in free_gigantic_page. + * need to be given back to CMA in free_gigantic_folio. */ if (hstate_is_gigantic(h) || hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } else { __free_pages(page, huge_page_order(h)); } @@ -2013,6 +2016,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nodemask_t *node_alloc_noretry) { struct page *page; + struct folio *folio; bool retry = false; retry: @@ -2023,14 +2027,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nid, nmask, node_alloc_noretry); if (!page) return NULL; - + folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_page(page, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); if (!retry) { retry = true; goto retry; @@ -2038,7 +2042,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; } } - prep_new_huge_page(h, page, page_to_nid(page)); + prep_new_huge_page(h, page, folio_nid(folio)); return page; } @@ -3039,6 +3043,7 @@ static void __init gather_bootmem_prealloc(void) list_for_each_entry(m, &huge_boot_pages, list) { struct page *page = virt_to_page(m); + struct folio *folio = page_folio(page); struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); @@ -3049,7 +3054,7 @@ static void __init gather_bootmem_prealloc(void) free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } /*
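
free_gigantic_folio() ends up as a folio front end over two page-era back ends: the CMA path still wants the raw head page (&folio->page) and the contig-range path wants a PFN (folio_pfn()). The standalone sketch below is a toy model of that wrapper shape only; try_cma_release() and the free_contig_range() stub are fabricated stand-ins, not the kernel implementations.

#include <stdbool.h>
#include <stdio.h>

struct page { unsigned long pfn; };
struct folio { struct page page; };

static unsigned long folio_pfn(struct folio *folio)
{
	return folio->page.pfn;
}

/* fake back ends: the CMA path may decline, the contig path always works */
static bool try_cma_release(struct page *page, unsigned long nr_pages)
{
	(void)page;
	(void)nr_pages;
	return false;		/* pretend this range was not a CMA allocation */
}

static void free_contig_range(unsigned long pfn, unsigned long nr_pages)
{
	printf("freeing %lu pages at pfn %lu\n", nr_pages, pfn);
}

static void free_gigantic_folio(struct folio *folio, unsigned int order)
{
	unsigned long nr_pages = 1UL << order;

	if (try_cma_release(&folio->page, nr_pages))	/* raw head page for CMA */
		return;
	free_contig_range(folio_pfn(folio), nr_pages);	/* otherwise free by PFN */
}

int main(void)
{
	struct folio f = { .page = { .pfn = 4096 } };

	free_gigantic_folio(&f, 18);	/* e.g. a 1GB folio with 4KB base pages */
	return 0;
}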

From patchwork Tue Nov 15 21:22:16 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20580
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 09/10] mm/hugetlb: convert hugetlb prep functions to folios
Date: Tue, 15 Nov 2022 13:22:16 -0800
Message-Id: <20221115212217.19539-10-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Convert prep_new_huge_page() and __prep_compound_gigantic_page() to folios.
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 59 +++++++++++++++++++++++++--------------------------- 1 file changed, 28 insertions(+), 31 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index bc039ff28b8f..c1d68648943a 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1780,28 +1780,26 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio) set_hugetlb_cgroup_rsvd(folio, NULL); } -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) +static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid) { - struct folio *folio = page_folio(page); - __prep_new_hugetlb_folio(h, folio); spin_lock_irq(&hugetlb_lock); __prep_account_new_huge_page(h, nid); spin_unlock_irq(&hugetlb_lock); } -static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, - bool demote) +static bool __prep_compound_gigantic_folio(struct folio *folio, + unsigned int order, bool demote) { int i, j; int nr_pages = 1 << order; struct page *p; - /* we rely on prep_new_huge_page to set the destructor */ - set_compound_order(page, order); - __SetPageHead(page); + /* we rely on prep_new_hugetlb_folio to set the destructor */ + folio_set_compound_order(folio, order); + __folio_set_head(folio); for (i = 0; i < nr_pages; i++) { - p = nth_page(page, i); + p = folio_page(folio, i); /* * For gigantic hugepages allocated through bootmem at @@ -1842,42 +1840,40 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, VM_BUG_ON_PAGE(page_count(p), p); } if (i != 0) - set_compound_head(p, page); + set_compound_head(p, &folio->page); } - atomic_set(compound_mapcount_ptr(page), -1); - atomic_set(compound_pincount_ptr(page), 0); + atomic_set(folio_mapcount_ptr(folio), -1); + atomic_set(folio_pincount_ptr(folio), 0); return true; out_error: /* undo page modifications made above */ for (j = 0; j < i; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); if (j != 0) clear_compound_head(p); set_page_refcounted(p); } /* need to clear PG_reserved on remaining tail pages */ for (; j < nr_pages; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); __ClearPageReserved(p); } - set_compound_order(page, 0); -#ifdef CONFIG_64BIT - page[1].compound_nr = 0; -#endif - __ClearPageHead(page); + folio_set_compound_order(folio, 0); + __folio_clear_head(folio); return false; } -static bool prep_compound_gigantic_page(struct page *page, unsigned int order) +static bool prep_compound_gigantic_folio(struct folio *folio, + unsigned int order) { - return __prep_compound_gigantic_page(page, order, false); + return __prep_compound_gigantic_folio(folio, order, false); } -static bool prep_compound_gigantic_page_for_demote(struct page *page, +static bool prep_compound_gigantic_folio_for_demote(struct folio *folio, unsigned int order) { - return __prep_compound_gigantic_page(page, order, true); + return __prep_compound_gigantic_folio(folio, order, true); } /* @@ -2029,7 +2025,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; folio = page_folio(page); if (hstate_is_gigantic(h)) { - if (!prep_compound_gigantic_page(page, huge_page_order(h))) { + if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! 
@@ -2042,7 +2038,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; } } - prep_new_huge_page(h, page, folio_nid(folio)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); return page; } @@ -3047,10 +3043,10 @@ static void __init gather_bootmem_prealloc(void) struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); - WARN_ON(page_count(page) != 1); - if (prep_compound_gigantic_page(page, huge_page_order(h))) { - WARN_ON(PageReserved(page)); - prep_new_huge_page(h, page, page_to_nid(page)); + WARN_ON(folio_ref_count(folio) != 1); + if (prep_compound_gigantic_folio(folio, huge_page_order(h))) { + WARN_ON(folio_test_reserved(folio)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ @@ -3469,13 +3465,14 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) for (i = 0; i < pages_per_huge_page(h); i += pages_per_huge_page(target_hstate)) { subpage = nth_page(page, i); + folio = page_folio(subpage); if (hstate_is_gigantic(target_hstate)) - prep_compound_gigantic_page_for_demote(subpage, + prep_compound_gigantic_folio_for_demote(folio, target_hstate->order); else prep_compound_page(subpage, target_hstate->order); set_page_private(subpage, 0); - prep_new_huge_page(target_hstate, subpage, nid); + prep_new_hugetlb_folio(target_hstate, folio, nid); free_huge_page(subpage); } mutex_unlock(&target_hstate->resize_lock);
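
The prep conversions swap nth_page(page, i) for folio_page(folio, i) when walking the pages that make up a (to-be-)compound folio. The standalone sketch below is a toy model of that indexed walk; the fixed page array and the head pointer field are fabricated for illustration and do not match the kernel's memmap layout.

#include <stdio.h>

#define NR_SUBPAGES 8

struct page { struct page *head; };
struct folio { struct page pages[NR_SUBPAGES]; };	/* head is pages[0] */

/* toy folio_page(): index into the pages backing the folio */
static struct page *folio_page(struct folio *folio, unsigned long i)
{
	return &folio->pages[i];
}

/* mark every tail page as belonging to the head page */
static void prep_compound_folio(struct folio *folio)
{
	unsigned long i;

	for (i = 1; i < NR_SUBPAGES; i++)	/* tail pages only */
		folio_page(folio, i)->head = folio_page(folio, 0);
}

int main(void)
{
	static struct folio f;			/* zero-initialized */

	prep_compound_folio(&f);
	printf("tail 3 points at head? %d\n",
	       folio_page(&f, 3)->head == folio_page(&f, 0));
	return 0;
}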

From patchwork Tue Nov 15 21:22:17 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 20581
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio
Date: Tue, 15 Nov 2022 13:22:17 -0800
Message-Id: <20221115212217.19539-11-sidhartha.kumar@oracle.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221115212217.19539-1-sidhartha.kumar@oracle.com>
References: <20221115212217.19539-1-sidhartha.kumar@oracle.com>

Many hugetlb allocation helper functions have now been converted to folios; update their higher-level callers to be compatible with folios.

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 98 ++++++++++++++++++++++++---------------------- 1 file changed, 46 insertions(+), 52 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c1d68648943a..ab20cfb0ff05 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1369,7 +1369,7 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order) } #ifdef CONFIG_CONTIG_ALLOC -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { unsigned long nr_pages = pages_per_huge_page(h); @@ -1385,7 +1385,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[nid], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } if (!(gfp_mask & __GFP_THISNODE)) { @@ -1396,17 +1396,16 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[node], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } } } #endif - - return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask); + return page_folio(alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask)); } #else /* !CONFIG_CONTIG_ALLOC */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1414,7 +1413,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, #endif /* CONFIG_CONTIG_ALLOC */ #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1938,7 +1937,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) return (index <<
compound_order(page_head)) + compound_idx; } -static struct page *alloc_buddy_huge_page(struct hstate *h, +static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { @@ -1997,7 +1996,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, if (node_alloc_noretry && !page && alloc_try_hard) node_set(nid, *node_alloc_noretry); - return page; + return page_folio(page); } /* @@ -2007,23 +2006,21 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, * Note that returned page is 'frozen': ref count of head page and all tail * pages is zero. */ -static struct page *alloc_fresh_huge_page(struct hstate *h, +static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { - struct page *page; struct folio *folio; bool retry = false; retry: if (hstate_is_gigantic(h)) - page = alloc_gigantic_page(h, gfp_mask, nid, nmask); + folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask); else - page = alloc_buddy_huge_page(h, gfp_mask, + folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry); - if (!page) + if (!folio) return NULL; - folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* @@ -2040,7 +2037,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, } prep_new_hugetlb_folio(h, folio, folio_nid(folio)); - return page; + return folio; } /* @@ -2050,21 +2047,21 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, nodemask_t *node_alloc_noretry) { - struct page *page; + struct folio *folio; int nr_nodes, node; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) { - page = alloc_fresh_huge_page(h, gfp_mask, node, nodes_allowed, - node_alloc_noretry); - if (page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node, + nodes_allowed, node_alloc_noretry); + if (folio) break; } - if (!page) + if (!folio) return 0; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ return 1; } @@ -2225,7 +2222,7 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn) static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page = NULL; + struct folio *folio = NULL; if (hstate_is_gigantic(h)) return NULL; @@ -2235,8 +2232,8 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, goto out_unlock; spin_unlock_irq(&hugetlb_lock); - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; spin_lock_irq(&hugetlb_lock); @@ -2248,43 +2245,42 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, * codeflow */ if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) { - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); spin_unlock_irq(&hugetlb_lock); - free_huge_page(page); + free_huge_page(&folio->page); return NULL; } h->surplus_huge_pages++; - h->surplus_huge_pages_node[page_to_nid(page)]++; + h->surplus_huge_pages_node[folio_nid(folio)]++; out_unlock: spin_unlock_irq(&hugetlb_lock); - return page; + return &folio->page; } static struct page 
*alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page; + struct folio *folio; if (hstate_is_gigantic(h)) return NULL; - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; /* fresh huge pages are frozen */ - set_page_refcounted(page); - + folio_ref_unfreeze(folio, 1); /* * We do not account these pages as surplus because they are only * temporary and will be released properly on the last reference */ - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); - return page; + return &folio->page; } /* @@ -2734,19 +2730,18 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma, } /* - * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one + * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve + * the old one * @h: struct hstate old page belongs to * @old_page: Old page to dissolve * @list: List to isolate the page in case we need to * Returns 0 on success, otherwise negated error. */ -static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, - struct list_head *list) +static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, + struct folio *old_folio, struct list_head *list) { gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - struct folio *old_folio = page_folio(old_page); int nid = folio_nid(old_folio); - struct page *new_page; struct folio *new_folio; int ret = 0; @@ -2757,26 +2752,25 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * the pool. This simplifies and let us do most of the processing * under the lock. */ - new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL); - if (!new_page) + new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL); + if (!new_folio) return -ENOMEM; - new_folio = page_folio(new_page); __prep_new_hugetlb_folio(h, new_folio); retry: spin_lock_irq(&hugetlb_lock); if (!folio_test_hugetlb(old_folio)) { /* - * Freed from under us. Drop new_page too. + * Freed from under us. Drop new_folio too. */ goto free_new; } else if (folio_ref_count(old_folio)) { /* - * Someone has grabbed the page, try to isolate it here. + * Someone has grabbed the folio, try to isolate it here. * Fail with -EBUSY if not possible. */ spin_unlock_irq(&hugetlb_lock); - ret = isolate_hugetlb(old_page, list); + ret = isolate_hugetlb(&old_folio->page, list); spin_lock_irq(&hugetlb_lock); goto free_new; } else if (!folio_test_hugetlb_freed(old_folio)) { @@ -2854,7 +2848,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list) if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list)) ret = 0; else if (!folio_ref_count(folio)) - ret = alloc_and_dissolve_huge_page(h, &folio->page, list); + ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); return ret; } @@ -3072,14 +3066,14 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid) if (!alloc_bootmem_huge_page(h, nid)) break; } else { - struct page *page; + struct folio *folio; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - page = alloc_fresh_huge_page(h, gfp_mask, nid, + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, &node_states[N_MEMORY], NULL); - if (!page) + if (!folio) break; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ } cond_resched(); }