From patchwork Fri Sep 22 07:09:20 2023
X-Patchwork-Submitter: Yajun Deng
X-Patchwork-Id: 143502
From: Yajun Deng
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev, glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org, david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Yajun Deng
Subject: [PATCH 1/4] mm: pass set_count and set_reserved to __init_single_page
Date: Fri, 22 Sep 2023 15:09:20 +0800
Message-Id: <20230922070923.355656-2-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

When initializing a single page, the caller may need to mark the page
reserved, and some pages, such as compound tail pages, do not need their
page count set.
Pass set_count and set_reserved to __init_single_page(), letting the
caller decide whether it needs to set the page count or mark the page
reserved.

Signed-off-by: Yajun Deng
---
 mm/hugetlb.c  |  2 +-
 mm/internal.h |  3 ++-
 mm/mm_init.c  | 30 ++++++++++++++++--------------
 3 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e2123d1bb4a2..4f91e47430ce 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3196,7 +3196,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);

-		__init_single_page(page, pfn, zone, nid);
+		__init_single_page(page, pfn, zone, nid, true, false);
 		prep_compound_tail((struct page *)folio, pfn - head_pfn);
 		ret = page_ref_freeze(page, 1);
 		VM_BUG_ON(!ret);
diff --git a/mm/internal.h b/mm/internal.h
index 7a961d12b088..8bded7f98493 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1210,7 +1210,8 @@ struct vma_prepare {
 };

 void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid);
+				unsigned long zone, int nid, bool set_count,
+				bool set_reserved);

 /* shrinker related functions */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 06a72c223bce..c40042098a82 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -557,11 +557,13 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 }

 void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+				unsigned long zone, int nid, bool set_count,
+				bool set_reserved)
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	if (set_count)
+		init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
@@ -572,6 +574,8 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	if (!is_highmem_idx(zone))
 		set_page_address(page, __va(pfn << PAGE_SHIFT));
 #endif
+	if (set_reserved)
+		__SetPageReserved(page);
 }

 #ifdef CONFIG_NUMA
@@ -714,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
 		if (zone_spans_pfn(zone, pfn))
 			break;
 	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, true, false);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
@@ -821,8 +825,8 @@ static void __init init_unavailable_range(unsigned long spfn,
 			pfn = pageblock_end_pfn(pfn) - 1;
 			continue;
 		}
-		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
-		__SetPageReserved(pfn_to_page(pfn));
+		__init_single_page(pfn_to_page(pfn), pfn, zone, node,
+				   true, true);
 		pgcnt++;
 	}

@@ -884,7 +888,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		}
 		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid);
+		__init_single_page(page, pfn, zone, nid, true, false);
 		if (context == MEMINIT_HOTPLUG)
 			__SetPageReserved(page);
@@ -965,11 +969,9 @@ static void __init memmap_init(void)
 #ifdef CONFIG_ZONE_DEVICE
 static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 					  unsigned long zone_idx, int nid,
-					  struct dev_pagemap *pgmap)
+					  struct dev_pagemap *pgmap,
+					  bool set_count)
 {
-
-	__init_single_page(page, pfn, zone_idx, nid);
-
 	/*
 	 * Mark page reserved as it will need to wait for onlining
 	 * phase for it to be fully associated with a zone.
@@ -977,7 +979,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * We can use the non-atomic __set_bit operation for setting
 	 * the flag as we are still initializing the pages.
 	 */
-	__SetPageReserved(page);
+	__init_single_page(page, pfn, zone_idx, nid, set_count, true);

 	/*
 	 * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
@@ -1041,7 +1043,7 @@ static void __ref memmap_init_compound(struct page *head,
 	for (pfn = head_pfn + 1; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);

-		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
+		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap, false);
 		prep_compound_tail(head, pfn - head_pfn);
 		set_page_count(page, 0);
@@ -1084,7 +1086,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 	for (pfn = start_pfn; pfn < end_pfn; pfn += pfns_per_compound) {
 		struct page *page = pfn_to_page(pfn);

-		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
+		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap, true);
 		if (pfns_per_compound == 1)
 			continue;
@@ -2058,7 +2060,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
 		} else {
 			page++;
 		}
-		__init_single_page(page, pfn, zid, nid);
+		__init_single_page(page, pfn, zid, nid, true, false);
 		nr_pages++;
 	}
 	return (nr_pages);
From patchwork Fri Sep 22 07:09:21 2023
X-Patchwork-Submitter: Yajun Deng
X-Patchwork-Id: 143366
From: Yajun Deng
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev, glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org, david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Yajun Deng
Subject: [PATCH 2/4] mm: Introduce MEMINIT_LATE context
Date: Fri, 22 Sep 2023 15:09:21 +0800
Message-Id: <20230922070923.355656-3-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

__free_pages_core() always resets the page count and clears the
reserved flag. This consumes a lot of time when there are many pages.
Introduce a MEMINIT_LATE context: if the context is MEMINIT_EARLY,
there is no need to reset the page count or clear the reserved flag.

Signed-off-by: Yajun Deng
---
 include/linux/mmzone.h |  1 +
 mm/internal.h          |  7 ++++---
 mm/kmsan/init.c        |  2 +-
 mm/memblock.c          |  4 ++--
 mm/memory_hotplug.c    |  2 +-
 mm/mm_init.c           | 11 ++++++-----
 mm/page_alloc.c        | 14 ++++++++------
 7 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1e9cf3aa1097..253e792d409f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1442,6 +1442,7 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
  */
 enum meminit_context {
 	MEMINIT_EARLY,
+	MEMINIT_LATE,
 	MEMINIT_HOTPLUG,
 };

diff --git a/mm/internal.h b/mm/internal.h
index 8bded7f98493..31737196257c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -394,9 +394,10 @@ static inline void clear_zone_contiguous(struct zone *zone)
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
-extern void memblock_free_pages(struct page *page, unsigned long pfn,
-				unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void memblock_free_pages(unsigned long pfn, unsigned int order,
+				enum meminit_context context);
+extern void __free_pages_core(struct page *page, unsigned int order,
+			      enum meminit_context context);

 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index ffedf4dbc49d..b7ed98b854a6 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -172,7 +172,7 @@ static void do_collection(void)
 		shadow = smallstack_pop(&collect);
 		origin = smallstack_pop(&collect);
 		kmsan_setup_meta(page, shadow, origin, collect.order);
-		__free_pages_core(page, collect.order);
+		__free_pages_core(page, collect.order, MEMINIT_LATE);
 	}
 }

diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..a32364366bb2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1685,7 +1685,7 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
 	end = PFN_DOWN(base + size);

 	for (; cursor < end; cursor++) {
-		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+		memblock_free_pages(cursor, 0, MEMINIT_LATE);
 		totalram_pages_inc();
 	}
 }
@@ -2089,7 +2089,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		while (start + (1UL << order) > end)
 			order--;

-		memblock_free_pages(pfn_to_page(start), start, order);
+		memblock_free_pages(start, order, MEMINIT_LATE);
 		start += (1UL << order);
 	}

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3b301c4023ff..d38548265f26 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -634,7 +634,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * case in page freeing fast path.
 	 */
 	debug_pagealloc_map_pages(page, 1 << order);
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, MEMINIT_HOTPLUG);
 	totalram_pages_add(1UL << order);
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c40042098a82..0a4437aae30d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1976,7 +1976,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_ORDER);
+		__free_pages_core(page, MAX_ORDER, MEMINIT_LATE);
 		return;
 	}

@@ -1986,7 +1986,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
+		__free_pages_core(page, 0, MEMINIT_LATE);
 	}
 }

@@ -2568,9 +2568,10 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
 	dma_reserve = new_dma_reserve;
 }

-void __init memblock_free_pages(struct page *page, unsigned long pfn,
-				unsigned int order)
+void __init memblock_free_pages(unsigned long pfn, unsigned int order,
+				enum meminit_context context)
 {
+	struct page *page = pfn_to_page(pfn);

 	if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
 		int nid = early_pfn_to_nid(pfn);
@@ -2583,7 +2584,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 		/* KMSAN will take care of these pages. */
 		return;
 	}
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, context);
 }

 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 06be8821d833..6c4f4531bee0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1278,7 +1278,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	__count_vm_events(PGFREE, 1 << order);
 }

-void __free_pages_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order, enum meminit_context context)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1289,14 +1289,16 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * of all pages to 1 ("allocated"/"not free"). We have to set the
 	 * refcount of all involved pages to 0.
 	 */
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
+	if (context != MEMINIT_EARLY) {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			__ClearPageReserved(p);
+			set_page_count(p, 0);
+		}
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
 	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);

 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
From patchwork Fri Sep 22 07:09:22 2023
X-Patchwork-Submitter: Yajun Deng
X-Patchwork-Id: 143223
From: Yajun Deng
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev, glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org, david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Yajun Deng
Subject: [PATCH 3/4] mm: Set page count and mark page reserved in reserve_bootmem_region
Date: Fri, 22 Sep 2023 15:09:22 +0800
Message-Id: <20230922070923.355656-4-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

memmap_init_range() sets the page count of all pages, but
__free_pages_core() then resets the count of every free page. These are
opposite operations, which is unnecessary and time-consuming in the
MEMINIT_EARLY context.

Instead, set the page count and mark the page reserved in
reserve_bootmem_region() when in the MEMINIT_EARLY context, and change
the context from MEMINIT_LATE to MEMINIT_EARLY in __free_pages_memory().
The list-head initialization in reserve_bootmem_region() is also no
longer needed, as __init_single_page() already zeroes the page.

The following data was measured on an x86 machine with 190GB of RAM.

before: free_low_memory_core_early() 342ms
after:  free_low_memory_core_early() 286ms

Signed-off-by: Yajun Deng
---
 mm/memblock.c   |  2 +-
 mm/mm_init.c    | 20 ++++++++++++++------
 mm/page_alloc.c |  8 +++++---
 3 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index a32364366bb2..9276f1819982 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2089,7 +2089,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		while (start + (1UL << order) > end)
 			order--;

-		memblock_free_pages(start, order, MEMINIT_LATE);
+		memblock_free_pages(start, order, MEMINIT_EARLY);
 		start += (1UL << order);
 	}

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 0a4437aae30d..1cc310f706a9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -718,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
 		if (zone_spans_pfn(zone, pfn))
 			break;
 	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, true, false);
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, false, false);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
@@ -756,8 +756,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 			init_reserved_page(start_pfn, nid);

-			/* Avoid false-positive PageTail() */
-			INIT_LIST_HEAD(&page->lru);
+			/* Set page count for the reserve region */
+			init_page_count(page);

 			/*
 			 * no need for atomic set_bit because the struct
@@ -888,9 +888,17 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		}
 		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid, true, false);
-		if (context == MEMINIT_HOTPLUG)
-			__SetPageReserved(page);
+
+		/* If the context is MEMINIT_EARLY, we will set the page count
+		 * and mark the page reserved in reserve_bootmem_region(); the
+		 * free region will not have a page count or reserved flag, so
+		 * we do not need to reset the page count or clear the
+		 * reserved flag in __free_pages_core().
+		 */
+		if (context == MEMINIT_EARLY)
+			__init_single_page(page, pfn, zone, nid, false, false);
+		else
+			__init_single_page(page, pfn, zone, nid, true, true);

 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c4f4531bee0..6ac58c5f3b00 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1285,9 +1285,11 @@ void __free_pages_core(struct page *page, unsigned int order, enum meminit_conte
 	unsigned int loop;

 	/*
-	 * When initializing the memmap, __init_single_page() sets the refcount
-	 * of all pages to 1 ("allocated"/"not free"). We have to set the
-	 * refcount of all involved pages to 0.
+	 * When initializing the memmap, memmap_init_range() sets the refcount
+	 * of all pages to 1 ("allocated"/"not free") in the hotplug context,
+	 * and we have to set the refcount of all involved pages to 0.
+	 * Otherwise we skip it, as reserve_bootmem_region() only sets the
+	 * refcount on reserved regions ("allocated") in the early context.
 	 */
 	if (context != MEMINIT_EARLY) {
 		prefetchw(p);
From: Yajun Deng
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    glider@google.com, elver@google.com, dvyukov@google.com, rppt@kernel.org,
    david@redhat.com, osalvador@suse.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Yajun Deng
Subject: [PATCH 4/4] mm: don't set page count in deferred_init_pages
Date: Fri, 22 Sep 2023 15:09:23 +0800
Message-Id: <20230922070923.355656-5-yajun.deng@linux.dev>
In-Reply-To: <20230922070923.355656-1-yajun.deng@linux.dev>
References: <20230922070923.355656-1-yajun.deng@linux.dev>

The page count operations in deferred_init_pages() and
deferred_free_range() are opposites of each other, which is unnecessary
and time-consuming. Don't set the page count in deferred_init_pages(),
as it will only be reset again later.
The following data was tested on an x86 machine with 190GB of RAM.

before:
node 0 deferred pages initialised in 78ms

after:
node 0 deferred pages initialised in 72ms

Signed-off-by: Yajun Deng
---
 mm/mm_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1cc310f706a9..fe78f6916c66 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1984,7 +1984,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_ORDER, MEMINIT_LATE);
+		__free_pages_core(page, MAX_ORDER, MEMINIT_EARLY);
 		return;
 	}

@@ -1994,7 +1994,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0, MEMINIT_LATE);
+		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }

@@ -2068,7 +2068,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
 	} else {
 		page++;
 	}
-	__init_single_page(page, pfn, zid, nid, true, false);
+	__init_single_page(page, pfn, zid, nid, false, false);
 	nr_pages++;
 	return (nr_pages);