From patchwork Mon Jul 24 13:46:44 2023
X-Patchwork-Submitter: Usama Arif <usama.arif@bytedance.com>
X-Patchwork-Id: 125021
From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com,
 rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com,
 liangma@liangbit.com, simon.evans@bytedance.com,
 punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [RFC 4/4] mm/memblock: Skip initialization of struct pages freed
 later by HVO
Date: Mon, 24 Jul 2023 14:46:44 +0100
Message-Id: <20230724134644.1299963-5-usama.arif@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
<20230724134644.1299963-1-usama.arif@bytedance.com>
References: <20230724134644.1299963-1-usama.arif@bytedance.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

If the region is for hugepages and HVO (HugeTLB Vmemmap Optimization) is
enabled, the struct pages that will be freed later by HVO do not need to be
initialized. This can save significant time when a large number of hugepages
is allocated at boot. As memmap_init_reserved_pages() is only called at boot
time, we don't need to worry about memory hotplug.

Hugepage regions are kept separate from non-hugepage regions in
memblock_merge_regions() so that the initialization of unused struct pages
can be skipped for the entire region.
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  3 +++
 mm/memblock.c        | 27 ++++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3525c514c061..8b9a1563f7b9 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -58,4 +58,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
 bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
+extern bool vmemmap_optimize_enabled;
+
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index e92d437bcb51..62072a0226de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -21,6 +21,7 @@
 #include <linux/io.h>
 
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 #define INIT_MEMBLOCK_REGIONS		128
 #define INIT_PHYSMEM_REGIONS		4
@@ -519,7 +520,8 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type,
 		if (this->base + this->size != next->base ||
 		    memblock_get_region_node(this) !=
 		    memblock_get_region_node(next) ||
-		    this->flags != next->flags) {
+		    this->flags != next->flags ||
+		    this->hugepage_size != next->hugepage_size) {
 			BUG_ON(this->base + this->size > next->base);
 			i++;
 			continue;
@@ -2125,10 +2127,25 @@ static void __init memmap_init_reserved_pages(void)
 
 	/* initialize struct pages for the reserved regions */
 	for_each_reserved_mem_region(region) {
 		nid = memblock_get_region_node(region);
-		start = region->base;
-		end = start + region->size;
-
-		reserve_bootmem_region(start, end, nid);
+		/*
+		 * If the region is for hugepages and if HVO is enabled, then those
+		 * struct pages which will be freed later don't need to be initialized.
+		 * This can save significant time when a large number of hugepages are
+		 * allocated at boot time. As this is at boot time, we don't need to
+		 * worry about memory hotplug.
+		 */
+		if (region->hugepage_size && vmemmap_optimize_enabled) {
+			for (start = region->base;
+			     start < region->base + region->size;
+			     start += region->hugepage_size) {
+				end = start + HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+				reserve_bootmem_region(start, end, nid);
+			}
+		} else {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end, nid);
+		}
 	}
 }