Message ID: 20230727204624.1942372-7-usama.arif@bytedance.com
State: New
Headers:
From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com, rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com, liangma@liangbit.com, simon.evans@bytedance.com, punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [v1 6/6] mm: hugetlb: Skip initialization of struct pages freed later by HVO
Date: Thu, 27 Jul 2023 21:46:24 +0100
Message-Id: <20230727204624.1942372-7-usama.arif@bytedance.com>
In-Reply-To: <20230727204624.1942372-1-usama.arif@bytedance.com>
References: <20230727204624.1942372-1-usama.arif@bytedance.com>
Series: mm/memblock: Skip prep and initialization of struct pages freed later by HVO
Commit Message
Usama Arif
July 27, 2023, 8:46 p.m. UTC
Initialization is skipped by marking the region with the
MEMBLOCK_RSRV_NOINIT flag. If the region is for hugepages and HVO
(HugeTLB Vmemmap Optimization) is enabled, the struct pages that will
be freed later by HVO do not need to be initialized. This can save
significant time when a large number of hugepages are allocated at
boot time. Only the first HUGETLB_VMEMMAP_RESERVE_SIZE struct pages
of each hugepage still need to be initialized.
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
mm/hugetlb.c | 21 +++++++++++++++++++++
mm/hugetlb_vmemmap.c | 2 +-
mm/hugetlb_vmemmap.h | 3 +++
3 files changed, 25 insertions(+), 1 deletion(-)
Comments
Hi Usama,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Usama-Arif/mm-hugetlb-Skip-prep-of-tail-pages-when-HVO-is-enabled/20230728-044839
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230727204624.1942372-7-usama.arif%40bytedance.com
patch subject: [v1 6/6] mm: hugetlb: Skip initialization of struct pages freed later by HVO
config: i386-debian-10.3 (https://download.01.org/0day-ci/archive/20230729/202307290029.Kr5EEBeY-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce: (https://download.01.org/0day-ci/archive/20230729/202307290029.Kr5EEBeY-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202307290029.Kr5EEBeY-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from mm/hugetlb.c:49:
   mm/hugetlb_vmemmap.h:56:6: warning: no previous prototype for 'vmemmap_should_optimize' [-Wmissing-prototypes]
      56 | bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
         |      ^~~~~~~~~~~~~~~~~~~~~~~
   mm/hugetlb.c: In function '__alloc_bootmem_huge_page':
   mm/hugetlb.c:3198:17: error: 'HUGETLB_VMEMMAP_RESERVE_SIZE' undeclared (first use in this function)
    3198 |                 HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
         |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/hugetlb.c:3198:17: note: each undeclared identifier is reported only once for each function it appears in
>> mm/hugetlb.c:3210:42: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    3210 |                         (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
         |                                  ^
>> mm/hugetlb.c:3210:33: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    3210 |                         (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
         |                         ^
   mm/hugetlb.c:3233:42: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    3233 |                         (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
         |                                  ^
   mm/hugetlb.c:3233:33: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    3233 |                         (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
         |                         ^

vim +3210 mm/hugetlb.c

  3190
  3191  int alloc_bootmem_huge_page(struct hstate *h, int nid)
  3192          __attribute__ ((weak, alias("__alloc_bootmem_huge_page")));
  3193  int __alloc_bootmem_huge_page(struct hstate *h, int nid)
  3194  {
  3195          struct huge_bootmem_page *m = NULL; /* initialize for clang */
  3196          int nr_nodes, node;
  3197          phys_addr_t hugetlb_vmemmap_reserve_size =
  3198                  HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
  3199          phys_addr_t noinit_base;
  3200
  3201          /* do node specific alloc */
  3202          if (nid != NUMA_NO_NODE) {
  3203                  m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
  3204                                  0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
  3205                  if (!m)
  3206                          return 0;
  3207
  3208                  if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
  3209                          noinit_base = virt_to_phys(
> 3210                                  (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
  3211                          memblock_rsrv_mark_noinit(
  3212                                  noinit_base,
  3213                                  huge_page_size(h) - hugetlb_vmemmap_reserve_size);
  3214                  }
  3215
  3216                  goto found;
  3217          }
  3218          /* allocate from next node when distributing huge pages */
  3219          for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
  3220                  m = memblock_alloc_try_nid_raw(
  3221                                  huge_page_size(h), huge_page_size(h),
  3222                                  0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
  3223                  /*
  3224                   * Use the beginning of the huge page to store the
  3225                   * huge_bootmem_page struct (until gather_bootmem
  3226                   * puts them into the mem_map).
  3227                   */
  3228                  if (!m)
  3229                          return 0;
  3230
  3231                  if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
  3232                          noinit_base = virt_to_phys(
  3233                                  (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
  3234                          memblock_rsrv_mark_noinit(
  3235                                  noinit_base,
  3236                                  huge_page_size(h) - hugetlb_vmemmap_reserve_size);
  3237                  }
  3238
  3239                  goto found;
  3240          }
  3241
  3242  found:
  3243          /* Put them into a private list first because mem_map is not up yet */
  3244          INIT_LIST_HEAD(&m->list);
  3245          list_add(&m->list, &huge_boot_pages);
  3246          m->hstate = h;
  3247          return 1;
  3248  }
  3249
Hi Usama,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Usama-Arif/mm-hugetlb-Skip-prep-of-tail-pages-when-HVO-is-enabled/20230728-044839
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230727204624.1942372-7-usama.arif%40bytedance.com
patch subject: [v1 6/6] mm: hugetlb: Skip initialization of struct pages freed later by HVO
config: arm64-randconfig-r032-20230727 (https://download.01.org/0day-ci/archive/20230729/202307290124.suQ4U8Y4-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: (https://download.01.org/0day-ci/archive/20230729/202307290124.suQ4U8Y4-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202307290124.suQ4U8Y4-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from mm/hugetlb.c:49:
   mm/hugetlb_vmemmap.h:56:6: warning: no previous prototype for function 'vmemmap_should_optimize' [-Wmissing-prototypes]
      56 | bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
         |      ^
   mm/hugetlb_vmemmap.h:56:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
      56 | bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
         | ^
         | static
>> mm/hugetlb.c:3198:3: error: use of undeclared identifier 'HUGETLB_VMEMMAP_RESERVE_SIZE'
    3198 |                 HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
         |                 ^
   1 warning and 1 error generated.

vim +/HUGETLB_VMEMMAP_RESERVE_SIZE +3198 mm/hugetlb.c

  3190
  3191  int alloc_bootmem_huge_page(struct hstate *h, int nid)
  3192          __attribute__ ((weak, alias("__alloc_bootmem_huge_page")));
  3193  int __alloc_bootmem_huge_page(struct hstate *h, int nid)
  3194  {
  3195          struct huge_bootmem_page *m = NULL; /* initialize for clang */
  3196          int nr_nodes, node;
  3197          phys_addr_t hugetlb_vmemmap_reserve_size =
> 3198                  HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
  3199          phys_addr_t noinit_base;
  3200
  3201          /* do node specific alloc */
  3202          if (nid != NUMA_NO_NODE) {
  3203                  m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
  3204                                  0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
  3205                  if (!m)
  3206                          return 0;
  3207
  3208                  if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
  3209                          noinit_base = virt_to_phys(
  3210                                  (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
  3211                          memblock_rsrv_mark_noinit(
  3212                                  noinit_base,
  3213                                  huge_page_size(h) - hugetlb_vmemmap_reserve_size);
  3214                  }
  3215
  3216                  goto found;
  3217          }
  3218          /* allocate from next node when distributing huge pages */
  3219          for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
  3220                  m = memblock_alloc_try_nid_raw(
  3221                                  huge_page_size(h), huge_page_size(h),
  3222                                  0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
  3223                  /*
  3224                   * Use the beginning of the huge page to store the
  3225                   * huge_bootmem_page struct (until gather_bootmem
  3226                   * puts them into the mem_map).
  3227                   */
  3228                  if (!m)
  3229                          return 0;
  3230
  3231                  if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
  3232                          noinit_base = virt_to_phys(
  3233                                  (void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
  3234                          memblock_rsrv_mark_noinit(
  3235                                  noinit_base,
  3236                                  huge_page_size(h) - hugetlb_vmemmap_reserve_size);
  3237                  }
  3238
  3239                  goto found;
  3240          }
  3241
  3242  found:
  3243          /* Put them into a private list first because mem_map is not up yet */
  3244          INIT_LIST_HEAD(&m->list);
  3245          list_add(&m->list, &huge_boot_pages);
  3246          m->hstate = h;
  3247          return 1;
  3248  }
  3249
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c1fcf2af591a..bb2b12f41026 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3166,6 +3166,9 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 {
 	struct huge_bootmem_page *m = NULL; /* initialize for clang */
 	int nr_nodes, node;
+	phys_addr_t hugetlb_vmemmap_reserve_size =
+		HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+	phys_addr_t noinit_base;
 
 	/* do node specific alloc */
 	if (nid != NUMA_NO_NODE) {
@@ -3173,6 +3176,15 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 				0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 		if (!m)
 			return 0;
+
+		if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
+			noinit_base = virt_to_phys(
+				(void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
+			memblock_rsrv_mark_noinit(
+				noinit_base,
+				huge_page_size(h) - hugetlb_vmemmap_reserve_size);
+		}
+
 		goto found;
 	}
 	/* allocate from next node when distributing huge pages */
@@ -3187,6 +3199,15 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 		 */
 		if (!m)
 			return 0;
+
+		if (vmemmap_optimize_enabled && hugetlb_vmemmap_optimizable(h)) {
+			noinit_base = virt_to_phys(
+				(void *)((phys_addr_t) m + hugetlb_vmemmap_reserve_size));
+			memblock_rsrv_mark_noinit(
+				noinit_base,
+				huge_page_size(h) - hugetlb_vmemmap_reserve_size);
+		}
+
 		goto found;
 	}
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 07555d2dc0cb..cb5171abe683 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -64,4 +64,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 {
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
+
+extern bool vmemmap_optimize_enabled;
+
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */