From patchwork Thu Nov 17 21:02:49 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21911
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 01/10] mm: add folio dtor and order setter functions
Date: Thu, 17 Nov 2022 13:02:49 -0800
Message-Id: <20221117210258.12732-2-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Add folio equivalents for set_compound_order() and set_compound_page_dtor().

Also remove extra new-lines introduced by mm/hugetlb: convert
move_hugetlb_state() to folios and mm/hugetlb_cgroup: convert
hugetlb_cgroup_uncharge_page() to folios.

Suggested-by: Mike Kravetz
Suggested-by: Muchun Song
Signed-off-by: Sidhartha Kumar
---
 include/linux/mm.h | 16 ++++++++++++++++
 mm/hugetlb.c       |  4 +---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a48c5ad16a5e..47ba2b2b22ae 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
 	page[1].compound_dtor = compound_dtor;
 }

+static inline void folio_set_compound_dtor(struct folio *folio,
+		enum compound_dtor_id compound_dtor)
+{
+	VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
+	folio->_folio_dtor = compound_dtor;
+}
+
 void destroy_large_folio(struct folio *folio);

 static inline int head_compound_pincount(struct page *head)
@@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 #endif
 }

+static inline void folio_set_compound_order(struct folio *folio,
+		unsigned int order)
+{
+	folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+	folio->_folio_nr_pages = 1U << order;
+#endif
+}
+
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 002337cc27b5..157f2392c64f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1780,7 +1780,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
 	hugetlb_vmemmap_optimize(h, &folio->page);
 	INIT_LIST_HEAD(&folio->lru);
-	folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
 	set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2936,7 +2936,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	 * a reservation exists for the allocation.
 	 */
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-
 	if (!page) {
 		spin_unlock_irq(&hugetlb_lock);
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -7349,7 +7348,6 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
 	int old_nid = folio_nid(old_folio);
 	int new_nid = folio_nid(new_folio);

-
 	folio_set_hugetlb_temporary(old_folio);
 	folio_clear_hugetlb_temporary(new_folio);

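Taken together, the two new helpers let hugetlb stop writing the page[1] fields
by hand. A minimal sketch of the calling pattern they enable, based only on the
helpers added in this patch (illustrative, not part of the patch itself):

	struct folio *folio = page_folio(page);

	/* previously: set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); */
	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);

	/* previously: set_compound_order(page, order); on 64-bit,
	 * folio_set_compound_order() also refreshes _folio_nr_pages. */
	folio_set_compound_order(folio, order);
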
From patchwork Thu Nov 17 21:02:50 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21905
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 02/10] mm/hugetlb: convert
 destroy_compound_gigantic_page() to folios
Date: Thu, 17 Nov 2022 13:02:50 -0800
Message-Id: <20221117210258.12732-3-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Convert page operations within __destroy_compound_gigantic_page() to the
corresponding folio operations.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 157f2392c64f..5edb81541ede 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1325,43 +1325,40 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		nr_nodes--)

 /* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_page(struct page *page,
+static void __destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order, bool demote)
 {
 	int i;
 	int nr_pages = 1 << order;
 	struct page *p;

-	atomic_set(compound_mapcount_ptr(page), 0);
-	atomic_set(subpages_mapcount_ptr(page), 0);
-	atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(folio_mapcount_ptr(folio), 0);
+	atomic_set(folio_subpages_mapcount_ptr(folio), 0);
+	atomic_set(folio_pincount_ptr(folio), 0);

 	for (i = 1; i < nr_pages; i++) {
-		p = nth_page(page, i);
+		p = folio_page(folio, i);
 		p->mapping = NULL;
 		clear_compound_head(p);
 		if (!demote)
 			set_page_refcounted(p);
 	}

-	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
-	__ClearPageHead(page);
+	folio_set_compound_order(folio, 0);
+	folio_clear_head(folio);
 }

-static void destroy_compound_hugetlb_page_for_demote(struct page *page,
+static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, true);
+	__destroy_compound_gigantic_folio(folio, order, true);
 }

 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
+static void destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, false);
+	__destroy_compound_gigantic_folio(folio, order, false);
 }

 static void free_gigantic_page(struct page *page, unsigned int order)
@@ -1430,7 +1427,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 	return NULL;
 }
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
-static inline void destroy_compound_gigantic_page(struct page *page,
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
 						unsigned int order) { }
 #endif

@@ -1477,8 +1474,8 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 *
 	 * For gigantic pages set the destructor to the null dtor.  This
 	 * destructor will never be called.  Before freeing the gigantic
-	 * page destroy_compound_gigantic_page will turn the compound page
-	 * into a simple group of pages.  After this the destructor does not
+	 * page destroy_compound_gigantic_folio will turn the folio into a
+	 * simple group of pages.  After this the destructor does not
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
@@ -1559,6 +1556,7 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 static void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;

 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1587,8 +1585,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * Move PageHWPoison flag from head page to the raw error pages,
 	 * which makes any healthy subpages reusable.
 	 */
-	if (unlikely(PageHWPoison(page)))
-		hugetlb_clear_page_hwpoison(page);
+	if (unlikely(folio_test_hwpoison(folio)))
+		hugetlb_clear_page_hwpoison(&folio->page);

 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		subpage = nth_page(page, i);
@@ -1604,7 +1602,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 */
 	if (hstate_is_gigantic(h) ||
 	    hugetlb_cma_page(page, huge_page_order(h))) {
-		destroy_compound_gigantic_page(page, huge_page_order(h));
+		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
 	} else {
 		__free_pages(page, huge_page_order(h));
@@ -3435,6 +3433,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
 	int i, nid = page_to_nid(page);
 	struct hstate *target_hstate;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;
 	int rc = 0;

@@ -3453,10 +3452,10 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	}

 	/*
-	 * Use destroy_compound_hugetlb_page_for_demote for all huge page
+	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
 	 * sizes as it will not ref count pages.
 	 */
-	destroy_compound_hugetlb_page_for_demote(page, huge_page_order(h));
+	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));

 	/*
 	 * Taking target hstate mutex synchronizes with set_max_huge_pages.

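The conversion in this patch is largely a one-to-one mapping onto the helpers
added in patch 1. A rough summary of the substitutions, shown as an
illustrative sketch rather than as part of the patch:

	/* tail-page walk: nth_page(page, i) becomes folio_page(folio, i) */
	struct page *p = folio_page(folio, i);

	/* set_compound_order(page, 0) plus the CONFIG_64BIT compound_nr reset
	 * collapse into one call, since folio_set_compound_order() also
	 * updates _folio_nr_pages */
	folio_set_compound_order(folio, 0);

	/* __ClearPageHead(page) becomes folio_clear_head(folio) */
	folio_clear_head(folio);
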
From patchwork Thu Nov 17 21:02:51 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21914
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 03/10] mm/hugetlb: convert
 dissolve_free_huge_page() to folios
Date: Thu, 17 Nov 2022 13:02:51 -0800
Message-Id: <20221117210258.12732-4-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Removes compound_head() call by using a folio rather than a head page.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5edb81541ede..ff609efe5497 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2126,21 +2126,21 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct folio *folio = page_folio(page);

 retry:
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
-	if (!PageHuge(page))
+	if (!folio_test_hugetlb(folio))
 		return 0;

 	spin_lock_irq(&hugetlb_lock);
-	if (!PageHuge(page)) {
+	if (!folio_test_hugetlb(folio)) {
 		rc = 0;
 		goto out;
 	}

-	if (!page_count(page)) {
-		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
+	if (!folio_ref_count(folio)) {
+		struct hstate *h = folio_hstate(folio);
+
 		if (!available_huge_pages(h))
 			goto out;
@@ -2148,7 +2148,7 @@ int dissolve_free_huge_page(struct page *page)
 		 * We should make sure that the page is already on the free list
 		 * when it is dissolved.
 		 */
-		if (unlikely(!HPageFreed(head))) {
+		if (unlikely(!folio_test_hugetlb_freed(folio))) {
 			spin_unlock_irq(&hugetlb_lock);
 			cond_resched();
@@ -2163,7 +2163,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}

-		remove_hugetlb_page(h, head, false);
+		remove_hugetlb_page(h, &folio->page, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
@@ -2175,12 +2175,12 @@ int dissolve_free_huge_page(struct page *page)
 		 * Attempt to allocate vmemmmap here so that we can take
 		 * appropriate action on failure.
 		 */
-		rc = hugetlb_vmemmap_restore(h, head);
+		rc = hugetlb_vmemmap_restore(h, &folio->page);
 		if (!rc) {
-			update_and_free_page(h, head, false);
+			update_and_free_page(h, &folio->page, false);
 		} else {
 			spin_lock_irq(&hugetlb_lock);
-			add_hugetlb_page(h, head, false);
+			add_hugetlb_page(h, &folio->page, false);
 			h->max_huge_pages++;
 			spin_unlock_irq(&hugetlb_lock);
 		}

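The point of the conversion is that the explicit compound_head() call and the
separate head variable disappear: page_folio() resolves the head once, and
every later check reuses the folio. A rough before/after sketch of that
pattern (illustrative only, not part of the patch):

	/* before: each check, and the hstate lookup, needed the head page */
	if (!PageHuge(page))
		return 0;
	struct page *head = compound_head(page);
	struct hstate *h = page_hstate(head);

	/* after: resolve the folio once and reuse it for every check */
	struct folio *folio = page_folio(page);
	if (!folio_test_hugetlb(folio))
		return 0;
	struct hstate *h = folio_hstate(folio);
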
From patchwork Thu Nov 17 21:02:52 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21906
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 04/10] mm/hugetlb: convert
 remove_hugetlb_page() to folios
Date: Thu, 17 Nov 2022 13:02:52 -0800
Message-Id: <20221117210258.12732-5-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Removes page_folio() call by converting callers to directly pass a folio
into __remove_hugetlb_page().

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ff609efe5497..d9604c0dac54 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1432,19 +1432,18 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
 #endif

 /*
- * Remove hugetlb page from lists, and update dtor so that page appears
+ * Remove hugetlb folio from lists, and update dtor so that the folio appears
  * as just a compound page.
  *
- * A reference is held on the page, except in the case of demote.
+ * A reference is held on the folio, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus,
 							bool demote)
 {
-	int nid = page_to_nid(page);
-	struct folio *folio = page_folio(page);
+	int nid = folio_nid(folio);

 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
@@ -1453,9 +1452,9 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;

-	list_del(&page->lru);
+	list_del(&folio->lru);

-	if (HPageFreed(page)) {
+	if (folio_test_hugetlb_freed(folio)) {
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 	}
@@ -1485,26 +1484,26 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 * be turned into a page of smaller size.
 	 */
 	if (!demote)
-		set_page_refcounted(page);
+		folio_ref_unfreeze(folio, 1);
 	if (hstate_is_gigantic(h))
-		set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
 	else
-		set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);

 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[nid]--;
 }

-static void remove_hugetlb_page(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, false);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, false);
 }

-static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, true);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }

 static void add_hugetlb_page(struct hstate *h, struct page *page,
@@ -1639,8 +1638,9 @@ static void free_hpage_workfn(struct work_struct *work)
 		/*
 		 * The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate()
 		 * is going to trigger because a previous call to
-		 * remove_hugetlb_page() will set_compound_page_dtor(page,
-		 * NULL_COMPOUND_DTOR), so do not use page_hstate() directly.
+		 * remove_hugetlb_folio() will call folio_set_compound_dtor
+		 * (folio, NULL_COMPOUND_DTOR), so do not use page_hstate()
+		 * directly.
 		 */
 		h = size_to_hstate(page_size(page));
@@ -1749,12 +1749,12 @@ void free_huge_page(struct page *page)
 		h->resv_huge_pages++;

 	if (folio_test_hugetlb_temporary(folio)) {
-		remove_hugetlb_page(h, page, false);
+		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
-		remove_hugetlb_page(h, page, true);
+		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else {
@@ -2090,6 +2090,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 {
 	int nr_nodes, node;
 	struct page *page = NULL;
+	struct folio *folio;

 	lockdep_assert_held(&hugetlb_lock);
 	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2101,7 +2102,8 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 		    !list_empty(&h->hugepage_freelists[node])) {
 			page = list_entry(h->hugepage_freelists[node].next,
 					  struct page, lru);
-			remove_hugetlb_page(h, page, acct_surplus);
+			folio = page_folio(page);
+			remove_hugetlb_folio(h, folio, acct_surplus);
 			break;
 		}
 	}
@@ -2163,7 +2165,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}

-		remove_hugetlb_page(h, &folio->page, false);
+		remove_hugetlb_folio(h, folio, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);

@@ -2801,7 +2803,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * and enqueue_huge_page() for new_page. The counters will remain
 	 * stable since this happens under the lock.
 	 */
-	remove_hugetlb_page(h, old_page, false);
+	remove_hugetlb_folio(h, old_folio, false);

 	/*
 	 * Ref count on new page is already zero as it was dropped
@@ -3228,7 +3230,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
 				goto out;
 			if (PageHighMem(page))
 				continue;
-			remove_hugetlb_page(h, page, false);
+			remove_hugetlb_folio(h, page_folio(page), false);
 			list_add(&page->lru, &page_list);
 		}
 	}
@@ -3439,7 +3441,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)

 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);

-	remove_hugetlb_page_for_demote(h, page, false);
+	remove_hugetlb_folio_for_demote(h, folio, false);
 	spin_unlock_irq(&hugetlb_lock);

 	rc = hugetlb_vmemmap_restore(h, page);

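Two details stand out for reviewers: call sites that only have a struct page
now convert at the call site, and the demote path re-initialises the reference
count with folio_ref_unfreeze() instead of set_page_refcounted(). A one-line
sketch of the caller-side pattern, mirroring the try_to_free_low() hunk above
(illustrative only):

	/* convert once where only a page is available, then stay in folio terms */
	remove_hugetlb_folio(h, page_folio(page), false);
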
From patchwork Thu Nov 17 21:02:53 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21913
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 05/10] mm/hugetlb: convert
 update_and_free_page() to folios
Date: Thu, 17 Nov 2022 13:02:53 -0800
Message-Id: <20221117210258.12732-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Make more progress on converting the free_huge_page() destructor to operate
on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d9604c0dac54..80301fab56d8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1478,7 +1478,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
-	 * after update_and_free_page is called.
+	 * after update_and_free_hugetlb_folio is called.
 	 *
 	 * In the case of demote we do not ref count the page as it will soon
 	 * be turned into a page of smaller size.
@@ -1609,7 +1609,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }

 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, so we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
  * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
  * the vmemmap pages.
@@ -1657,11 +1657,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
 		flush_work(&free_hpage_work);
 }

-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
 				 bool atomic)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
-		__update_and_free_page(h, page);
+	if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+		__update_and_free_page(h, &folio->page);
 		return;
 	}
@@ -1672,16 +1672,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
 	 */
-	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+	if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
 		schedule_work(&free_hpage_work);
 }

 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
 	struct page *page, *t_page;
+	struct folio *folio;

 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page, false);
+		folio = page_folio(page);
+		update_and_free_hugetlb_folio(h, folio, false);
 		cond_resched();
 	}
 }
@@ -1751,12 +1753,12 @@ void free_huge_page(struct page *page)
 	if (folio_test_hugetlb_temporary(folio)) {
 		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -2170,8 +2172,8 @@ int dissolve_free_huge_page(struct page *page)
 	spin_unlock_irq(&hugetlb_lock);

 	/*
-	 * Normally update_and_free_page will allocate required vmemmmap
-	 * before freeing the page. update_and_free_page will fail to
+	 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
+	 * before freeing the page. update_and_free_hugtlb_folio will fail to
 	 * free the page if it can not allocate required vmemmap. We
 	 * need to adjust max_huge_pages if the page is not freed.
 	 * Attempt to allocate vmemmmap here so that we can take
@@ -2179,7 +2181,7 @@
 	 */
 	rc = hugetlb_vmemmap_restore(h, &folio->page);
 	if (!rc) {
-		update_and_free_page(h, &folio->page, false);
+		update_and_free_hugetlb_folio(h, folio, false);
 	} else {
 		spin_lock_irq(&hugetlb_lock);
 		add_hugetlb_page(h, &folio->page, false);
@@ -2816,7 +2818,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * Pages have been replaced, we can safely free the old one.
 	 */
 	spin_unlock_irq(&hugetlb_lock);
-	update_and_free_page(h, old_page, false);
+	update_and_free_hugetlb_folio(h, old_folio, false);
 	}

 	return ret;

@@ -2825,7 +2827,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	spin_unlock_irq(&hugetlb_lock);
 	/* Page has a zero ref count, but needs a ref to be freed */
 	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_page(h, new_page, false);
+	update_and_free_hugetlb_folio(h, new_folio, false);

 	return ret;
 }

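With this patch the free path stays in folio terms end to end: the caller
resolves the folio once, unlinks it under hugetlb_lock, and hands the same
folio to update_and_free_hugetlb_folio(), which may defer the actual freeing
to the vmemmap workqueue. A rough sketch of that sequence, mirroring the
free_huge_page() hunk above (illustrative only):

	struct folio *folio = page_folio(page);

	spin_lock_irqsave(&hugetlb_lock, flags);
	remove_hugetlb_folio(h, folio, false);
	spin_unlock_irqrestore(&hugetlb_lock, flags);
	update_and_free_hugetlb_folio(h, folio, true);	/* may punt to free_hpage_workfn() */
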
From patchwork Thu Nov 17 21:02:54 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 21912
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 06/10] mm/hugetlb: convert add_hugetlb_page()
 to folios and add hugetlb_cma_folio()
Date: Thu, 17 Nov 2022 13:02:54 -0800
hugetlb_cma_folio() Date: Thu, 17 Nov 2022 13:02:54 -0800 Message-Id: <20221117210258.12732-7-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-ORIG-GUID: 5PyG6JSvGqMgaqRjUhIf2Cn11r8hVcFN X-Proofpoint-GUID: 5PyG6JSvGqMgaqRjUhIf2Cn11r8hVcFN X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749778845842928514?= X-GMAIL-MSGID: =?utf-8?q?1749778845842928514?= Convert add_hugetlb_page() to take in a folio, also convert hugetlb_cma_page() to take in a folio. Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 80301fab56d8..bf36aa8e6072 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -54,13 +54,13 @@ struct hstate hstates[HUGE_MAX_HSTATE]; #ifdef CONFIG_CMA static struct cma *hugetlb_cma[MAX_NUMNODES]; static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata; -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { - return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page, + return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page, 1 << order); } #else -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { return false; } @@ -1506,17 +1506,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli __remove_hugetlb_folio(h, folio, adjust_surplus, true); } -static void add_hugetlb_page(struct hstate *h, struct page *page, +static void add_hugetlb_folio(struct hstate *h, struct folio *folio, bool adjust_surplus) { int zeroed; - int nid = page_to_nid(page); + int nid = folio_nid(folio); - VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page); + VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio); lockdep_assert_held(&hugetlb_lock); - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&folio->lru); h->nr_huge_pages++; h->nr_huge_pages_node[nid]++; @@ -1525,21 +1525,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, h->surplus_huge_pages_node[nid]++; } - set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - set_page_private(page, 0); + folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR); + folio_change_private(folio, 0); /* * We have to set HPageVmemmapOptimized again as above - * set_page_private(page, 0) cleared it. + * folio_change_private(folio, 0) cleared it. 
*/ - SetHPageVmemmapOptimized(page); + folio_set_hugetlb_vmemmap_optimized(folio); /* - * This page is about to be managed by the hugetlb allocator and + * This folio is about to be managed by the hugetlb allocator and * should have no users. Drop our reference, and check for others * just in case. */ - zeroed = put_page_testzero(page); - if (!zeroed) + zeroed = folio_put_testzero(folio); + if (unlikely(!zeroed)) /* * It is VERY unlikely soneone else has taken a ref on * the page. In this case, we simply return as the @@ -1548,8 +1548,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, */ return; - arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + arch_clear_hugepage_flags(&folio->page); + enqueue_huge_page(h, &folio->page); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1575,7 +1575,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_page(h, page, true); + add_hugetlb_folio(h, page_folio(page), true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * need to be given back to CMA in free_gigantic_page. */ if (hstate_is_gigantic(h) || - hugetlb_cma_page(page, huge_page_order(h))) { + hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h)); } else { @@ -2184,7 +2184,7 @@ int dissolve_free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, false); } else { spin_lock_irq(&hugetlb_lock); - add_hugetlb_page(h, &folio->page, false); + add_hugetlb_folio(h, folio, false); h->max_huge_pages++; spin_unlock_irq(&hugetlb_lock); } @@ -3451,7 +3451,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) /* Allocation of vmemmmap failed, we can not demote page */ spin_lock_irq(&hugetlb_lock); set_page_refcounted(page); - add_hugetlb_page(h, page, false); + add_hugetlb_folio(h, page_folio(page), false); return rc; } From patchwork Thu Nov 17 21:02:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 21910 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp621451wrr; Thu, 17 Nov 2022 13:05:29 -0800 (PST) X-Google-Smtp-Source: AA0mqf48vJz3pFAODBTs/jg41mAhjtlYfiWo+EUBKLJGs5eT2IBPL4o37LZ3jTvVe+A1PqrhY5i3 X-Received: by 2002:a17:902:ed8c:b0:186:72bd:371f with SMTP id e12-20020a170902ed8c00b0018672bd371fmr4389867plj.131.1668719128782; Thu, 17 Nov 2022 13:05:28 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668719128; cv=none; d=google.com; s=arc-20160816; b=ws5zrxwz17oyO5Dw3EUBfaJjPIcemJC0+Oe4dvGyD6NPmbfwuYAJrECFtWD2W4XtSu R+nF9sQ2BZ7jtqZzJwM3aYt0pj0FfNxdACymZlqCWoYEZscaiEhDuTMNT5coRG6RCPfo DVz6+wZa3Mc76MBmtLP/fwZ9N6qIloueQUyNitBNz1rkMns/mGu/FSXfjceyKYTyymbS 9pAUC2M3w6gvz1D9Qwmm3YJtEWxVHqQQEL6QEODiKJbZXZv6oALBTLHGhzwidcC0s+p0 mEYK0Kf2d5GxJOm+1B6Q+ckP2FqVWvo5JXkYNJ9NVohfbsKEHFQXw89jYELB+NBcnVAx EkWQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=DrYH6pRWKnBgFEybzVkbkIij9mLZr9efXDcEx32Vsb4=; b=AMh0nFrK4qmKfGgriRQ2xCrdus83BIReeqQujMpCXi8m15hIIuNhdOrwVJXkEl/maq 
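The calling-convention change in patch 06/10 above — a helper that used to take a struct page now takes a struct folio, with any remaining page-based caller bridging via page_folio() — can be modelled in a few lines of standalone C. The toy_* types and helpers below are simplified stand-ins invented for the sketch, not the kernel's real struct page/struct folio definitions.

/*
 * Minimal userspace sketch of the page -> folio conversion pattern used in
 * the patch above.  The types and helpers are simplified stand-ins for the
 * kernel's struct page/struct folio; only the calling convention is modelled.
 */
#include <stdio.h>

struct toy_page {
	int nid;		/* NUMA node the page lives on */
};

/* A folio is a typed wrapper around its head page. */
struct toy_folio {
	struct toy_page page;
};

/* Stand-in for page_folio(): page -> containing folio. */
static struct toy_folio *toy_page_folio(struct toy_page *page)
{
	return (struct toy_folio *)page;
}

/* Stand-in for folio_nid(): read the node id through the folio. */
static int toy_folio_nid(struct toy_folio *folio)
{
	return folio->page.nid;
}

/* After the conversion, the helper takes a folio directly... */
static void add_hugetlb_folio_sketch(struct toy_folio *folio)
{
	printf("adding folio on node %d\n", toy_folio_nid(folio));
}

int main(void)
{
	struct toy_folio folio = { .page = { .nid = 0 } };
	struct toy_page *page = &folio.page;

	/* ...and callers that still hold a page bridge with page_folio(). */
	add_hugetlb_folio_sketch(toy_page_folio(page));
	return 0;
}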
Qyw6aDOpnOHavRQM8I7RBYkFNDWupm5jk9wT/hh6I76I2iC1oiGaPh5EKDnsDpkVFp/n HWwNGLzy2uHqMIjP+4H2ZniwtbdGBPCeDsZBzPpJK8n3Hxvn50VK0NDG2LSLh8xInwpK kQaJE1YvX31m6TH+9B4tWUafZG2yrVjSc/jG/omtRcB7+JdIG+bPw413/1hkY4Zgk5nI tm13kJ0UhG88p0NAN4CTMvKJPoR1JQC9k1QTcf7nHLnevwHlYim/xF1/hPzUqDnl9eoV dTKg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=VDGIwT5M; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id e12-20020a17090301cc00b001781cf050b6si2129198plh.14.2022.11.17.13.05.13; Thu, 17 Nov 2022 13:05:28 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=VDGIwT5M; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240411AbiKQVEG (ORCPT + 99 others); Thu, 17 Nov 2022 16:04:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48452 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234954AbiKQVDl (ORCPT ); Thu, 17 Nov 2022 16:03:41 -0500 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA91227160 for ; Thu, 17 Nov 2022 13:03:33 -0800 (PST) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOsX002460; Thu, 17 Nov 2022 21:03:23 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=DrYH6pRWKnBgFEybzVkbkIij9mLZr9efXDcEx32Vsb4=; b=VDGIwT5M5SD802U1EjVYgYvMvY6sHlKd5pcfMjemg07SH41xeDz6wbz9VvtpZ9PAq16W 99G+Y+gBnPly26Utio0czONIJgtOc+G3uEBR31tdl4IGcrpaXXKJnZP9eCh/a1QrkuV1 gkbpUU2RMnI8CM6MBwy8qcWSV8i0rBNPVGz1u5e1CRzcsPUxvq4BOKpJMwkTEjCbccFZ I8633KcHf1pPqlp1uVWvxA14rJnEj8ozBBy9aM7IHWMEQjg45ga3NR3QEY3AascCcC0Z zaJvmBEcOrZUzLnxnbBI5XwYF/bs26xKBkbgoNU3Nq4C2v95ClHCg6hfSujpz/QV6DtW uw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgn4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:22 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKXMtJ011016; Thu, 17 Nov 2022 21:03:19 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagy9j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:18 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com 
(iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fp032582; Thu, 17 Nov 2022 21:03:18 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-8; Thu, 17 Nov 2022 21:03:18 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 07/10] mm/hugetlb: convert enqueue_huge_page() to folios Date: Thu, 17 Nov 2022 13:02:55 -0800 Message-Id: <20221117210258.12732-8-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: WCB2r5V_hKEQ7qtIE9ysF_dPdvWcN3qx X-Proofpoint-ORIG-GUID: WCB2r5V_hKEQ7qtIE9ysF_dPdvWcN3qx X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749778829061520337?= X-GMAIL-MSGID: =?utf-8?q?1749778829061520337?= Convert callers of enqueue_huge_page() to pass in a folio, function is renamed to enqueue_hugetlb_folio(). 
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index bf36aa8e6072..4eb6f3d6f46e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1127,17 +1127,17 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg) return false; } -static void enqueue_huge_page(struct hstate *h, struct page *page) +static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio) { - int nid = page_to_nid(page); + int nid = folio_nid(folio); lockdep_assert_held(&hugetlb_lock); - VM_BUG_ON_PAGE(page_count(page), page); + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio); - list_move(&page->lru, &h->hugepage_freelists[nid]); + list_move(&folio->lru, &h->hugepage_freelists[nid]); h->free_huge_pages++; h->free_huge_pages_node[nid]++; - SetHPageFreed(page); + folio_set_hugetlb_freed(folio); } static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid) @@ -1549,7 +1549,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, return; arch_clear_hugepage_flags(&folio->page); - enqueue_huge_page(h, &folio->page); + enqueue_hugetlb_folio(h, folio); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1761,7 +1761,7 @@ void free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, true); } else { arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, folio); spin_unlock_irqrestore(&hugetlb_lock, flags); } } @@ -2436,7 +2436,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) if ((--needed) < 0) break; /* Add the page to the hugetlb allocator */ - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, page_folio(page)); } free: spin_unlock_irq(&hugetlb_lock); @@ -2802,8 +2802,8 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * Ok, old_page is still a genuine free hugepage. Remove it from * the freelist and decrease the counters. These will be * incremented again when calling __prep_account_new_huge_page() - * and enqueue_huge_page() for new_page. The counters will remain - * stable since this happens under the lock. + * and enqueue_hugetlb_folio() for new_folio. The counters will + * remain stable since this happens under the lock. */ remove_hugetlb_folio(h, old_folio, false); @@ -2812,7 +2812,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * earlier. It can be directly added to the pool free list. */ __prep_account_new_huge_page(h, nid); - enqueue_huge_page(h, new_page); + enqueue_hugetlb_folio(h, new_folio); /* * Pages have been replaced, we can safely free the old one. 
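The enqueue side converted above keeps the same bookkeeping as before (a per-node free list plus counters) but keys everything off folio_nid() and the folio's lru linkage. A minimal userspace model of that shape follows; the freelist array, MAX_NODES constant and toy types are inventions of the sketch, not kernel definitions.

/*
 * Userspace sketch of the enqueue-side conversion: the free-list helper now
 * receives a folio and derives the node id through a folio accessor rather
 * than page_to_nid() on a page.
 */
#include <stdio.h>

#define MAX_NODES 4

struct toy_folio {
	int nid;
	struct toy_folio *next;	/* stand-in for the lru list linkage */
};

static struct toy_folio *freelist[MAX_NODES];
static unsigned long free_count[MAX_NODES];

static int toy_folio_nid(const struct toy_folio *folio)
{
	return folio->nid;
}

/* Post-conversion shape: the folio, not the page, is the unit enqueued. */
static void enqueue_hugetlb_folio_sketch(struct toy_folio *folio)
{
	int nid = toy_folio_nid(folio);

	folio->next = freelist[nid];
	freelist[nid] = folio;
	free_count[nid]++;
}

int main(void)
{
	struct toy_folio a = { .nid = 1 }, b = { .nid = 1 };

	enqueue_hugetlb_folio_sketch(&a);
	enqueue_hugetlb_folio_sketch(&b);
	printf("node 1 free huge folios: %lu\n", free_count[1]);
	return 0;
}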
From patchwork Thu Nov 17 21:02:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 21907 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp621347wrr; Thu, 17 Nov 2022 13:05:16 -0800 (PST) X-Google-Smtp-Source: AA0mqf7CzoI3fJy07tWlkkTZi1qPVjfvXa23gMQD08v93onM/Qi6AueV++XfMsXXlr8hMv2IPOnW X-Received: by 2002:a63:f117:0:b0:476:e9c6:50ac with SMTP id f23-20020a63f117000000b00476e9c650acmr3691456pgi.430.1668719116380; Thu, 17 Nov 2022 13:05:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668719116; cv=none; d=google.com; s=arc-20160816; b=ntesK3ROu89IAJW9JRy2O4iltpkeEJ9qMght5Am7UOyPYmTCSff7DOF0EEH2Xz40sF VoXoIDDb4Wiba2qNi1NGV4ERT7K2eydnaH6jvVQ5XjfoygoaSL0kerasI1Z0q5wBW/46 6WwgQs9OVjHT+n0XuUf8x8bszedt/R0IFjR8y+zpAMZL7opsFZwztiMCVLQmA5AXKkAV aUAGNMs3vbCSw1v8FRAswAYlrGOup06OukyfWzl0XOJvXyS91RqHFyMoST78Yg+fRVSe vhmuMjdfcQBs3SHnrbWjJLPxE2ldyWQJAGLCnh9zz2UixpNd1a0T1+/XyOThsQD0i/1B Nwfw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=cwkUHEJT5bDkOzVSGa3m03Vwd59Go/cano1leZnsjBU=; b=urFhb3Bf3564oN4F+Ro9oVCtRvSGlKUuheX7VB5dwNfZLPLLi8Dr5vjivO6jhwCqzi DXEhpn5qx52CFbOYQ4hlhDmLsvvz5vbtGPNfIkfNQObi5p+ER5kIR+sk/yOODl96RefE kPDsyz4/fVmcBBnrjIf8izmXx8N6bdqrx1xs/2MXnc+kVuAGs+y4UNzsDtAluFgVG8/f 0eFtM/+kxvsLTGG9BuKnxbmAzVEoJiRjYK9tvso7WAeu/B4DHKYXz4zpy+zJJtVAXcPC /EfcMbmlZo+z0EgKrD9NulxgkP0QVz8bLJHkojkNJVizoR5iAJnxDawdEnqSU2/+aGqt SKZQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=f7crwLu4; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id g14-20020a056a001a0e00b0056c2e395593si2065152pfv.167.2022.11.17.13.05.02; Thu, 17 Nov 2022 13:05:16 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=f7crwLu4; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240267AbiKQVDw (ORCPT + 99 others); Thu, 17 Nov 2022 16:03:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234809AbiKQVDk (ORCPT ); Thu, 17 Nov 2022 16:03:40 -0500 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA6BC22BEA for ; Thu, 17 Nov 2022 13:03:34 -0800 (PST) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOsY002460; Thu, 17 Nov 2022 21:03:23 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=cwkUHEJT5bDkOzVSGa3m03Vwd59Go/cano1leZnsjBU=; b=f7crwLu4zaBU+jwvzBHOsYEFrBd4NcUw3Xj6RAYVogF3jVfljo5ZOF49cyR2Rs6gokzL ce8BkqUGJEKvSg1Zyaw7J1/kBeDGQLJgLcavHmRYc/cQ40zYGwB4wnl/cvfh4hb+rXqs BFks4NnwRTL/vlUoHndVl/bpYLUrd2/x4d7jWSDI3HvR0ZYhxibUbQUp4vmFW0oJ/yjC bbuneS01PvTsn7kgov1+08G8WT+OzeiqdKG3DMxaIU7LDxX/qH3i2AMXjCQ/gpTWTTTO QLeh5FnCY2d+fwdptTy8TZesQYyUzdaX4UZI4JkZ+fQRB6W80AfrZYIIb51ncyjYXjn2 cA== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgn8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:22 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKXQTe010907; Thu, 17 Nov 2022 21:03:20 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagyax-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:20 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fr032582; Thu, 17 Nov 2022 21:03:19 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-9; Thu, 17 Nov 2022 21:03:19 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 08/10] mm/hugetlb: convert free_gigantic_page() to folios 
Date: Thu, 17 Nov 2022 13:02:56 -0800 Message-Id: <20221117210258.12732-9-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: AbH47GrRIcaFuj3YJ6J0Owjvw9P9qaQ9 X-Proofpoint-ORIG-GUID: AbH47GrRIcaFuj3YJ6J0Owjvw9P9qaQ9 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749778816175840102?= X-GMAIL-MSGID: =?utf-8?q?1749778816175840102?= Convert callers of free_gigantic_page() to use folios, function is then renamed to free_gigantic_folio(). Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 4eb6f3d6f46e..38c5ca015363 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1361,18 +1361,20 @@ static void destroy_compound_gigantic_folio(struct folio *folio, __destroy_compound_gigantic_folio(folio, order, false); } -static void free_gigantic_page(struct page *page, unsigned int order) +static void free_gigantic_folio(struct folio *folio, unsigned int order) { /* * If the page isn't allocated using the cma allocator, * cma_release() returns false. */ #ifdef CONFIG_CMA - if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) + int nid = folio_nid(folio); + + if (cma_release(hugetlb_cma[nid], &folio->page, 1 << order)) return; #endif - free_contig_range(page_to_pfn(page), 1 << order); + free_contig_range(folio_pfn(folio), 1 << order); } #ifdef CONFIG_CONTIG_ALLOC @@ -1426,7 +1428,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, { return NULL; } -static inline void free_gigantic_page(struct page *page, unsigned int order) { } +static inline void free_gigantic_folio(struct folio *folio, + unsigned int order) { } static inline void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order) { } #endif @@ -1565,7 +1568,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * If we don't know which subpages are hwpoisoned, we can't free * the hugepage, so it's leaked intentionally. */ - if (HPageRawHwpUnreliable(page)) + if (folio_test_hugetlb_raw_hwp_unreliable(folio)) return; if (hugetlb_vmemmap_restore(h, page)) { @@ -1575,7 +1578,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. 
*/ - add_hugetlb_folio(h, page_folio(page), true); + add_hugetlb_folio(h, folio, true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1588,7 +1591,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) hugetlb_clear_page_hwpoison(&folio->page); for (i = 0; i < pages_per_huge_page(h); i++) { - subpage = nth_page(page, i); + subpage = folio_page(folio, i); subpage->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | 1 << PG_active | 1 << PG_private | @@ -1597,12 +1600,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page) /* * Non-gigantic pages demoted from CMA allocated gigantic pages - * need to be given back to CMA in free_gigantic_page. + * need to be given back to CMA in free_gigantic_folio. */ if (hstate_is_gigantic(h) || hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } else { __free_pages(page, huge_page_order(h)); } @@ -2023,6 +2026,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nodemask_t *node_alloc_noretry) { struct page *page; + struct folio *folio; bool retry = false; retry: @@ -2033,14 +2037,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nid, nmask, node_alloc_noretry); if (!page) return NULL; - + folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_page(page, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); if (!retry) { retry = true; goto retry; @@ -3048,6 +3052,7 @@ static void __init gather_bootmem_prealloc(void) list_for_each_entry(m, &huge_boot_pages, list) { struct page *page = virt_to_page(m); + struct folio *folio = page_folio(page); struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); @@ -3058,7 +3063,7 @@ static void __init gather_bootmem_prealloc(void) free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } /* From patchwork Thu Nov 17 21:02:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 21908 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp621376wrr; Thu, 17 Nov 2022 13:05:19 -0800 (PST) X-Google-Smtp-Source: AA0mqf4dfc2mUDcTwjNi2m5dpsAvxDtYRR7ZYRRaSf4f6QGtdoh2ou/B+L4i/DE0zvXgUGg32N0F X-Received: by 2002:a63:a0b:0:b0:46f:53cb:65b5 with SMTP id 11-20020a630a0b000000b0046f53cb65b5mr3548043pgk.507.1668719118870; Thu, 17 Nov 2022 13:05:18 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668719118; cv=none; d=google.com; s=arc-20160816; b=gZHev1K1X4GghZsi3OHPPKvsFE6Byzp00S6NmKvxTL9mI0k83ize278tCaDSg708JU Pi1EDOYZhvmW3L1ifMZRinnxeoJxP+17soyFu1akkcYe8e7L65a0/3fwzxKAFs23mSqq Cow1jocHxdo8wzGEc/bcjHPJdLchi/+6hj9O5hD2Yp0AuIh8OSqVKRdcvkSESRh+BF0T iDiHzTDTmvMsjV55LANBqT3tJOHmwyW6Fu7vOCoU1cL9mYUnT4+Ng2acf1i+19T8mGTc e0OP4lVVkoBj/nXxjk6g+YOXzaC88OvEnhbQMIiNQmWXdhCz8BGMNrcF8ggIEw6YjeDU K3bA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
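In the free path converted by patch 08/10 above, the head-page accessors are swapped for folio ones: page_to_pfn()/page_to_nid() become folio_pfn()/folio_nid(), and the CMA release still receives &folio->page. A rough, self-contained illustration of the accessor side of that change follows; the pfn value, order and toy types are invented for the example.

/*
 * Sketch of the accessor swap in the gigantic-page free path:
 * page_to_pfn()/page_to_nid() on the head page become
 * folio_pfn()/folio_nid() on the folio.
 */
#include <stdio.h>

struct toy_page {
	unsigned long pfn;
	int nid;
};

struct toy_folio {
	struct toy_page page;
};

static unsigned long toy_folio_pfn(const struct toy_folio *folio)
{
	return folio->page.pfn;	/* i.e. page_to_pfn(&folio->page) */
}

static int toy_folio_nid(const struct toy_folio *folio)
{
	return folio->page.nid;
}

/* Post-conversion shape of the gigantic-folio free helper. */
static void free_gigantic_folio_sketch(struct toy_folio *folio, unsigned int order)
{
	printf("releasing %u pages starting at pfn %lu on node %d\n",
	       1u << order, toy_folio_pfn(folio), toy_folio_nid(folio));
}

int main(void)
{
	struct toy_folio folio = { .page = { .pfn = 0x100000, .nid = 0 } };

	free_gigantic_folio_sketch(&folio, 18);	/* e.g. a 1 GiB folio on x86 */
	return 0;
}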
:references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=MCK/rXGUTZNQyhTZ+P51qmllsIFD0ccOv0nuQ2VWPKM=; b=LbEwQtZgashS1ZIV/5G7A18sunKTURrT8kq9Q+3rF/uN6KC4+TCw2J52XC7Fn6tqtI eeIV04xBJqb1JYv68+viAHRxe89H0uNisrsMsPTZooDVugFOj/Fa1eTHgGExBnjRz7Wh 7i6f9Dp5yXtgRaw/tIZg011DAdnX0U5p9NFYPrkaAoK75vJSZc7LRsBHj31Spk37e/Sm yNUzXCZIo4NH+P1qLw8/DSZv3AD8PTswG0Lo/PDeNJZZo5xXQX0TpEsIK77eoTu1rLna w2EBI/LyKK5sMdF6TE/NGyQUJhL2tMuSOETODDfAcqqkeXzfpi9KRnDBjDjwzpgPf6gH jp+g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=MfRqQmhD; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id w11-20020a056a0014cb00b0056c8da80bfcsi1953838pfu.351.2022.11.17.13.05.05; Thu, 17 Nov 2022 13:05:18 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=MfRqQmhD; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240338AbiKQVD4 (ORCPT + 99 others); Thu, 17 Nov 2022 16:03:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234747AbiKQVDk (ORCPT ); Thu, 17 Nov 2022 16:03:40 -0500 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA5B322BE3 for ; Thu, 17 Nov 2022 13:03:34 -0800 (PST) Received: from pps.filterd (m0246629.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOKut004819; Thu, 17 Nov 2022 21:03:26 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=MCK/rXGUTZNQyhTZ+P51qmllsIFD0ccOv0nuQ2VWPKM=; b=MfRqQmhDx6K5xV8shmyIJBd7uIX0lygdVT7WzN3v1ea1XOIxeXlnPBzwtByaUk73rfZv dcuu9WvbCG603vi0uF8G3v6h8Cl7bUZpZUqOe3GdOYi3W0IMpxuHR9QupKzKWbTj419C sZGiW33H9HNqkzJETyoSi9LaeGcOjbp22YRbXktCUsl9gzMk9HWh8DdZj9Ge005pqkXK zlc6D4+Q14y2U6lywcTJXVYb8pmJsSXviuywNfezJzC7J43YykM5f5ur/HJnGeB1BUa5 5ytmiPQS1GTPwaaWjCUq3QDaQDOcNGI091bMv/8QdqwbB6xB88aBDDEJuaC4e05KyJzQ mw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv3jste28-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:24 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKZSqx010894; Thu, 17 Nov 2022 21:03:22 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagycb-1 (version=TLSv1.2 
cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:21 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Ft032582; Thu, 17 Nov 2022 21:03:21 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-10; Thu, 17 Nov 2022 21:03:21 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 09/10] mm/hugetlb: convert hugetlb prep functions to folios Date: Thu, 17 Nov 2022 13:02:57 -0800 Message-Id: <20221117210258.12732-10-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-ORIG-GUID: lMDZlMUHC4wS5IWV71oJIP6jr0QEoaU7 X-Proofpoint-GUID: lMDZlMUHC4wS5IWV71oJIP6jr0QEoaU7 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749778818855127056?= X-GMAIL-MSGID: =?utf-8?q?1749778818855127056?= Convert prep_new_huge_page() and __prep_compound_gigantic_page() to folios. 
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 61 +++++++++++++++++++++++++--------------------------- 1 file changed, 29 insertions(+), 32 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 38c5ca015363..6ecf9874c521 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1789,28 +1789,26 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio) set_hugetlb_cgroup_rsvd(folio, NULL); } -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) +static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid) { - struct folio *folio = page_folio(page); - __prep_new_hugetlb_folio(h, folio); spin_lock_irq(&hugetlb_lock); __prep_account_new_huge_page(h, nid); spin_unlock_irq(&hugetlb_lock); } -static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, - bool demote) +static bool __prep_compound_gigantic_folio(struct folio *folio, + unsigned int order, bool demote) { int i, j; int nr_pages = 1 << order; struct page *p; - /* we rely on prep_new_huge_page to set the destructor */ - set_compound_order(page, order); - __SetPageHead(page); + /* we rely on prep_new_hugetlb_folio to set the destructor */ + folio_set_compound_order(folio, order); + __SetPageHead(&folio->page); for (i = 0; i < nr_pages; i++) { - p = nth_page(page, i); + p = folio_page(folio, i); /* * For gigantic hugepages allocated through bootmem at @@ -1851,43 +1849,41 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, VM_BUG_ON_PAGE(page_count(p), p); } if (i != 0) - set_compound_head(p, page); + set_compound_head(p, &folio->page); } - atomic_set(compound_mapcount_ptr(page), -1); - atomic_set(subpages_mapcount_ptr(page), 0); - atomic_set(compound_pincount_ptr(page), 0); + atomic_set(folio_mapcount_ptr(folio), -1); + atomic_set(folio_subpages_mapcount_ptr(folio), 0); + atomic_set(folio_pincount_ptr(folio), 0); return true; out_error: /* undo page modifications made above */ for (j = 0; j < i; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); if (j != 0) clear_compound_head(p); set_page_refcounted(p); } /* need to clear PG_reserved on remaining tail pages */ for (; j < nr_pages; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); __ClearPageReserved(p); } - set_compound_order(page, 0); -#ifdef CONFIG_64BIT - page[1].compound_nr = 0; -#endif - __ClearPageHead(page); + folio_set_compound_order(folio, 0); + folio_clear_head(folio); return false; } -static bool prep_compound_gigantic_page(struct page *page, unsigned int order) +static bool prep_compound_gigantic_folio(struct folio *folio, + unsigned int order) { - return __prep_compound_gigantic_page(page, order, false); + return __prep_compound_gigantic_folio(folio, order, false); } -static bool prep_compound_gigantic_page_for_demote(struct page *page, +static bool prep_compound_gigantic_folio_for_demote(struct folio *folio, unsigned int order) { - return __prep_compound_gigantic_page(page, order, true); + return __prep_compound_gigantic_folio(folio, order, true); } /* @@ -2039,7 +2035,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; folio = page_folio(page); if (hstate_is_gigantic(h)) { - if (!prep_compound_gigantic_page(page, huge_page_order(h))) { + if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! 
@@ -2052,7 +2048,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; } } - prep_new_huge_page(h, page, page_to_nid(page)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); return page; } @@ -3056,10 +3052,10 @@ static void __init gather_bootmem_prealloc(void) struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); - WARN_ON(page_count(page) != 1); - if (prep_compound_gigantic_page(page, huge_page_order(h))) { - WARN_ON(PageReserved(page)); - prep_new_huge_page(h, page, page_to_nid(page)); + WARN_ON(folio_ref_count(folio) != 1); + if (prep_compound_gigantic_folio(folio, huge_page_order(h))) { + WARN_ON(folio_test_reserved(folio)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ @@ -3478,13 +3474,14 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) for (i = 0; i < pages_per_huge_page(h); i += pages_per_huge_page(target_hstate)) { subpage = nth_page(page, i); + folio = page_folio(subpage); if (hstate_is_gigantic(target_hstate)) - prep_compound_gigantic_page_for_demote(subpage, + prep_compound_gigantic_folio_for_demote(folio, target_hstate->order); else prep_compound_page(subpage, target_hstate->order); set_page_private(subpage, 0); - prep_new_huge_page(target_hstate, subpage, nid); + prep_new_hugetlb_folio(target_hstate, folio, nid); free_huge_page(subpage); } mutex_unlock(&target_hstate->resize_lock); From patchwork Thu Nov 17 21:02:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 21909 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp621413wrr; Thu, 17 Nov 2022 13:05:24 -0800 (PST) X-Google-Smtp-Source: AA0mqf6M/pNVBNC1it56wOAWrUfGaSdzVaJRz3XbPz0nGvFNwt8d6ZxVRSwVJxb94fQfSFpyjW8C X-Received: by 2002:a17:903:2449:b0:17f:5778:888b with SMTP id l9-20020a170903244900b0017f5778888bmr4366681pls.119.1668719123915; Thu, 17 Nov 2022 13:05:23 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668719123; cv=none; d=google.com; s=arc-20160816; b=MxBFUYEHmn/RIXjJJH0ltUonC17k8fA+M33qTcieF8V5YGtpsESUmoiyKt7LAmvMiM g9k2S/nRTtS8bcEFjyXXyh79xXh8Rk6cA8h8Me+u9BvageEwvMlEZgcGv8fLFZC2+pVU K9jRgRuO8JoLTsPkWsm3QMoEUcA4xllk4rs28d8kEZ7FopO4HgIiS9HkzctmOofnXWnD hR33DSY7u3aNkZq508oJhPy8J2NBOdj/rfwPuL2k44uLl3lpu0Pm+buxtqwtUTd0AZRF tKKmadNV6s5gMBD5haH0VbQoaFoCUVgQRnyuuQiDjnhcMh6RKaiDkcWBr07yk6YJzFzr szjg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=oiF9VDlWy9y8mIee/ulZOw0MitvLoK+ljs9NDxhkktU=; b=MdU/hXP2XxAcVlYlnQZI1w/dVtoC4y4nhDh54T7JtLSIHNuYODaCsMSVOsqpTciKLl kGqYhgLiOMdQJTr5nKlp4Gewm9pjXW5aoL5YmDhlqNtBm/x9tc5OMRUiCIoBytTY2YL6 FxqBy6tynnYnbOZfDh14faIoP5EJF6JK9DEvEepkDBQ9WxpEBY20skucA/VGVqLyAA4s qIl+TEIN6tHHG8KBLnNLLgTIgS9erli0U2aX4KSucPOHZM8kGzl6/TUi5GMs9wFFktru OLB878IOc27DW4o4rYwyZ2CAh3aju9Br3hjDD+alDtuKuAjSNMhGZbadxsrwUyWScQNv nIEA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=VvDg14FE; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: from 
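After patch 09/10 above, __prep_compound_gigantic_folio() walks the tail pages with folio_page(folio, i) instead of nth_page(page, i) while pointing each tail at the head. A toy model of that loop is sketched below; the fixed-size subpage array and NR_SUBPAGES constant are inventions of the sketch and do not reflect how the kernel lays out compound pages.

/*
 * Sketch of the subpage walk after the conversion: tail pages are reached
 * with a folio_page()-style accessor and each tail records the head page.
 */
#include <stdio.h>

#define NR_SUBPAGES 8		/* pretend order-3 compound page */

struct toy_page {
	struct toy_page *compound_head;	/* NULL for the head page itself */
};

struct toy_folio {
	struct toy_page pages[NR_SUBPAGES];
};

/* Stand-in for folio_page(folio, i). */
static struct toy_page *toy_folio_page(struct toy_folio *folio, int i)
{
	return &folio->pages[i];
}

/* Post-conversion shape of the prep loop. */
static void prep_compound_folio_sketch(struct toy_folio *folio)
{
	struct toy_page *head = toy_folio_page(folio, 0);
	int i;

	for (i = 1; i < NR_SUBPAGES; i++)
		toy_folio_page(folio, i)->compound_head = head;
}

int main(void)
{
	struct toy_folio folio = { 0 };

	prep_compound_folio_sketch(&folio);
	printf("tail 3 points at head: %s\n",
	       folio.pages[3].compound_head == &folio.pages[0] ? "yes" : "no");
	return 0;
}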
out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id cp8-20020a056a00348800b0056bcfbc75casi1741315pfb.177.2022.11.17.13.05.09; Thu, 17 Nov 2022 13:05:23 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@oracle.com header.s=corp-2022-7-12 header.b=VvDg14FE; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=oracle.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240121AbiKQVEC (ORCPT + 99 others); Thu, 17 Nov 2022 16:04:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48454 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239843AbiKQVDl (ORCPT ); Thu, 17 Nov 2022 16:03:41 -0500 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAB26286CA for ; Thu, 17 Nov 2022 13:03:35 -0800 (PST) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOse002460; Thu, 17 Nov 2022 21:03:26 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=oiF9VDlWy9y8mIee/ulZOw0MitvLoK+ljs9NDxhkktU=; b=VvDg14FEWYbX1LeHqxnrB+23hs03FodxFJd+LnUisyoPukL26JOD2XDqt6fDADYXPe2R /bhUwENwMks+z2fAOcsC/MeIi7UKsVEEhDEFoRwMCVRUnXvPjRJ6YKnamC0H9i48RSWE od8QAxDt6dddyCnq5/GEALy2G1hzKl8AyjjBJExhmmy6fgIU4BltV2jJAMxqOBtdcyqC uYoNwUGyBifYB46VzUqb4hoOGueYV38TsEFl7e4ZnQFu2lZ2K7LnsfWk9r74siCADTTF ZrArSscj/JF2ruguWlOECO1wKjxbG64TLx458PCq9KJGhRmIL+Tdnkwm3pCYjGQKr1Lt fw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgnn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:25 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKX0SZ010906; Thu, 17 Nov 2022 21:03:23 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagyd6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:23 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fv032582; Thu, 17 Nov 2022 21:03:22 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-11; Thu, 17 Nov 2022 21:03:22 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 10/10] mm/hugetlb: change 
hugetlb allocation functions to return a folio Date: Thu, 17 Nov 2022 13:02:58 -0800 Message-Id: <20221117210258.12732-11-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: DOBrNUfYvOVgJV39WwrLF5eaa6twuBCS X-Proofpoint-ORIG-GUID: DOBrNUfYvOVgJV39WwrLF5eaa6twuBCS X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749778823843582152?= X-GMAIL-MSGID: =?utf-8?q?1749778823843582152?= Many hugetlb allocation helper functions have now been converting to folios, update their higher level callers to be compatible with folios. Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 98 ++++++++++++++++++++++++---------------------------- 1 file changed, 46 insertions(+), 52 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6ecf9874c521..aa0a388bded4 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1378,7 +1378,7 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order) } #ifdef CONFIG_CONTIG_ALLOC -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { unsigned long nr_pages = pages_per_huge_page(h); @@ -1394,7 +1394,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[nid], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } if (!(gfp_mask & __GFP_THISNODE)) { @@ -1405,17 +1405,16 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[node], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } } } #endif - - return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask); + return page_folio(alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask)); } #else /* !CONFIG_CONTIG_ALLOC */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1423,7 +1422,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, #endif /* CONFIG_CONTIG_ALLOC */ #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1948,7 +1947,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) return (index << compound_order(page_head)) + 
compound_idx; } -static struct page *alloc_buddy_huge_page(struct hstate *h, +static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { @@ -2007,7 +2006,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, if (node_alloc_noretry && !page && alloc_try_hard) node_set(nid, *node_alloc_noretry); - return page; + return page_folio(page); } /* @@ -2017,23 +2016,21 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, * Note that returned page is 'frozen': ref count of head page and all tail * pages is zero. */ -static struct page *alloc_fresh_huge_page(struct hstate *h, +static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { - struct page *page; struct folio *folio; bool retry = false; retry: if (hstate_is_gigantic(h)) - page = alloc_gigantic_page(h, gfp_mask, nid, nmask); + folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask); else - page = alloc_buddy_huge_page(h, gfp_mask, + folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry); - if (!page) + if (!folio) return NULL; - folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* @@ -2050,7 +2047,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, } prep_new_hugetlb_folio(h, folio, folio_nid(folio)); - return page; + return folio; } /* @@ -2060,21 +2057,21 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, nodemask_t *node_alloc_noretry) { - struct page *page; + struct folio *folio; int nr_nodes, node; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) { - page = alloc_fresh_huge_page(h, gfp_mask, node, nodes_allowed, - node_alloc_noretry); - if (page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node, + nodes_allowed, node_alloc_noretry); + if (folio) break; } - if (!page) + if (!folio) return 0; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ return 1; } @@ -2235,7 +2232,7 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn) static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page = NULL; + struct folio *folio = NULL; if (hstate_is_gigantic(h)) return NULL; @@ -2245,8 +2242,8 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, goto out_unlock; spin_unlock_irq(&hugetlb_lock); - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; spin_lock_irq(&hugetlb_lock); @@ -2258,43 +2255,42 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, * codeflow */ if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) { - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); spin_unlock_irq(&hugetlb_lock); - free_huge_page(page); + free_huge_page(&folio->page); return NULL; } h->surplus_huge_pages++; - h->surplus_huge_pages_node[page_to_nid(page)]++; + h->surplus_huge_pages_node[folio_nid(folio)]++; out_unlock: spin_unlock_irq(&hugetlb_lock); - return page; + return &folio->page; } static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t 
gfp_mask, int nid, nodemask_t *nmask) { - struct page *page; + struct folio *folio; if (hstate_is_gigantic(h)) return NULL; - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; /* fresh huge pages are frozen */ - set_page_refcounted(page); - + folio_ref_unfreeze(folio, 1); /* * We do not account these pages as surplus because they are only * temporary and will be released properly on the last reference */ - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); - return page; + return &folio->page; } /* @@ -2743,19 +2739,18 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma, } /* - * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one + * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve + * the old one * @h: struct hstate old page belongs to * @old_page: Old page to dissolve * @list: List to isolate the page in case we need to * Returns 0 on success, otherwise negated error. */ -static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, - struct list_head *list) +static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, + struct folio *old_folio, struct list_head *list) { gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - struct folio *old_folio = page_folio(old_page); int nid = folio_nid(old_folio); - struct page *new_page; struct folio *new_folio; int ret = 0; @@ -2766,26 +2761,25 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * the pool. This simplifies and let us do most of the processing * under the lock. */ - new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL); - if (!new_page) + new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL); + if (!new_folio) return -ENOMEM; - new_folio = page_folio(new_page); __prep_new_hugetlb_folio(h, new_folio); retry: spin_lock_irq(&hugetlb_lock); if (!folio_test_hugetlb(old_folio)) { /* - * Freed from under us. Drop new_page too. + * Freed from under us. Drop new_folio too. */ goto free_new; } else if (folio_ref_count(old_folio)) { /* - * Someone has grabbed the page, try to isolate it here. + * Someone has grabbed the folio, try to isolate it here. * Fail with -EBUSY if not possible. */ spin_unlock_irq(&hugetlb_lock); - ret = isolate_hugetlb(old_page, list); + ret = isolate_hugetlb(&old_folio->page, list); spin_lock_irq(&hugetlb_lock); goto free_new; } else if (!folio_test_hugetlb_freed(old_folio)) { @@ -2863,7 +2857,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list) if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list)) ret = 0; else if (!folio_ref_count(folio)) - ret = alloc_and_dissolve_huge_page(h, &folio->page, list); + ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); return ret; } @@ -3081,14 +3075,14 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid) if (!alloc_bootmem_huge_page(h, nid)) break; } else { - struct page *page; + struct folio *folio; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - page = alloc_fresh_huge_page(h, gfp_mask, nid, + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, &node_states[N_MEMORY], NULL); - if (!page) + if (!folio) break; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ } cond_resched(); }
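With alloc_fresh_hugetlb_folio() and the other helpers in this final patch returning struct folio *, callers do the NULL check on the folio and only fall back to &folio->page where a page-based interface (such as free_huge_page()) remains. A small malloc-backed sketch of that calling convention follows, using invented toy types rather than the kernel's.

/*
 * Sketch of the new calling convention: the allocation helper hands back a
 * folio pointer and callers only reach for &folio->page at the remaining
 * page-based boundaries.  The malloc-backed allocator is purely illustrative.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_page {
	int nid;
};

struct toy_folio {
	struct toy_page page;
};

/* Post-conversion shape of a fresh-folio allocator. */
static struct toy_folio *alloc_fresh_hugetlb_folio_sketch(int nid)
{
	struct toy_folio *folio = calloc(1, sizeof(*folio));

	if (!folio)
		return NULL;
	folio->page.nid = nid;
	return folio;
}

/* A caller that still takes a page, as free_huge_page() does in the patch. */
static void free_huge_page_sketch(struct toy_page *page)
{
	struct toy_folio *folio = (struct toy_folio *)page;

	printf("freeing huge folio on node %d\n", page->nid);
	free(folio);
}

int main(void)
{
	struct toy_folio *folio = alloc_fresh_hugetlb_folio_sketch(0);

	if (!folio)
		return 1;
	/* Bridge back to the page-based API with &folio->page. */
	free_huge_page_sketch(&folio->page);
	return 0;
}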