Message ID | 20240102131249.76622-2-gang.li@linux.dev |
---|---|
State | New |
Headers |
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand <david@redhat.com>, David Rientjes <rientjes@google.com>, Mike Kravetz <mike.kravetz@oracle.com>, Muchun Song <muchun.song@linux.dev>, Andrew Morton <akpm@linux-foundation.org>, Tim Chen <tim.c.chen@linux.intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li <gang.li@linux.dev>
Subject: [PATCH v3 1/7] hugetlb: code clean for hugetlb_hstate_alloc_pages
Date: Tue, 2 Jan 2024 21:12:43 +0800
Message-Id: <20240102131249.76622-2-gang.li@linux.dev>
In-Reply-To: <20240102131249.76622-1-gang.li@linux.dev>
References: <20240102131249.76622-1-gang.li@linux.dev>
List-Id: <linux-kernel.vger.kernel.org> |
Series | hugetlb: parallelize hugetlb page init on boot |
Commit Message
Gang Li
Jan. 2, 2024, 1:12 p.m. UTC
The readability of `hugetlb_hstate_alloc_pages` is poor. By cleaning up
the code, its readability can be improved, facilitating future
modifications.
This patch extracts two functions to reduce the complexity of
`hugetlb_hstate_alloc_pages` and has no functional changes.
- hugetlb_hstate_alloc_pages_node_specific() iterates through each
  online node and performs allocation if necessary.
- hugetlb_hstate_alloc_pages_report() reports errors during allocation,
  and updates the value of h->max_huge_pages accordingly.
Signed-off-by: Gang Li <gang.li@linux.dev>
---
mm/hugetlb.c | 46 +++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)
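For readers outside the thread: the "node specific alloc" path is driven by per-node requests on the kernel command line, which populate h->max_huge_pages_node[]. A minimal illustration of the two entry paths (the counts are made up):

    # Per-node request: sets h->max_huge_pages_node[0] and [1], so
    # hugetlb_hstate_alloc_pages_node_specific() allocates and returns true.
    hugepagesz=2M hugepages=0:256,1:256

    # Global request: max_huge_pages_node[] stays zero, so allocation
    # falls through to the node-balanced loop.
    hugepagesz=2M hugepages=512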
Comments
On 2024/1/2 21:12, Gang Li wrote:
> [...]
> +static bool __init hugetlb_hstate_alloc_pages_node_specific(struct hstate *h)

I'd like to rename this to hugetlb_hstate_alloc_pages_specific_nodes.

Otherwise, LGTM.

Reviewed-by: Muchun Song <muchun.song@linux.dev>
On Tue, 2024-01-02 at 21:12 +0800, Gang Li wrote:
> [...]
> - hugetlb_hstate_alloc_pages_node_specific() iterates through each
>   online node and performs allocation if necessary.
> - hugetlb_hstate_alloc_pages_report() reports errors during allocation,
>   and updates the value of h->max_huge_pages accordingly.

Minor nit, I think hugetlb_hstate_alloc_pages_errcheck() is more
descriptive than hugetlb_hstate_alloc_pages_report().

Otherwise

Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
On 2024/1/10 18:19, Muchun Song wrote:
>> +static bool __init hugetlb_hstate_alloc_pages_node_specific(struct hstate *h)
>
> I'd like to rename this to hugetlb_hstate_alloc_pages_specific_nodes.
>
> Otherwise, LGTM.
>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>

Thanks! I will adjust it in the next version.
On 2024/1/11 05:55, Tim Chen wrote:
> Minor nit, I think hugetlb_hstate_alloc_pages_errcheck() is more
> descriptive than hugetlb_hstate_alloc_pages_report().

Thanks! This looks more intuitive.

> Otherwise
>
> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
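Taking both review suggestions together, the helper declarations in the next revision would presumably read as follows; a sketch of the proposed renames, not code from this posting:

    static bool __init hugetlb_hstate_alloc_pages_specific_nodes(struct hstate *h);
    static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
                                                           struct hstate *h);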
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ed1581b670d42..2606135ec55e6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3482,6 +3482,33 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 	h->max_huge_pages_node[nid] = i;
 }
 
+static bool __init hugetlb_hstate_alloc_pages_node_specific(struct hstate *h)
+{
+	int i;
+	bool node_specific_alloc = false;
+
+	for_each_online_node(i) {
+		if (h->max_huge_pages_node[i] > 0) {
+			hugetlb_hstate_alloc_pages_onenode(h, i);
+			node_specific_alloc = true;
+		}
+	}
+
+	return node_specific_alloc;
+}
+
+static void __init hugetlb_hstate_alloc_pages_report(unsigned long allocated, struct hstate *h)
+{
+	if (allocated < h->max_huge_pages) {
+		char buf[32];
+
+		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
+			h->max_huge_pages, buf, allocated);
+		h->max_huge_pages = allocated;
+	}
+}
+
 /*
  * NOTE: this routine is called in different contexts for gigantic and
  * non-gigantic pages.
@@ -3499,7 +3526,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	struct folio *folio;
 	LIST_HEAD(folio_list);
 	nodemask_t *node_alloc_noretry;
-	bool node_specific_alloc = false;
 
 	/* skip gigantic hugepages allocation if hugetlb_cma enabled */
 	if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3508,14 +3534,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	}
 
 	/* do node specific alloc */
-	for_each_online_node(i) {
-		if (h->max_huge_pages_node[i] > 0) {
-			hugetlb_hstate_alloc_pages_onenode(h, i);
-			node_specific_alloc = true;
-		}
-	}
-
-	if (node_specific_alloc)
+	if (hugetlb_hstate_alloc_pages_node_specific(h))
 		return;
 
 	/* below will do all node balanced alloc */
@@ -3558,14 +3577,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	/* list will be empty if hstate_is_gigantic */
 	prep_and_add_allocated_folios(h, &folio_list);
 
-	if (i < h->max_huge_pages) {
-		char buf[32];
-
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
-		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
-			h->max_huge_pages, buf, i);
-		h->max_huge_pages = i;
-	}
+	hugetlb_hstate_alloc_pages_report(i, h);
 	kfree(node_alloc_noretry);
 }
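A quick way to observe the path handled by hugetlb_hstate_alloc_pages_report() on a running system (the commands are illustrative; the warning format string is taken from the patch): request more huge pages at boot than the machine can back, then inspect the resulting pool sizes:

    # current huge page pool size, system-wide and per node
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    # on a shortfall, the boot log carries the pr_warn from the patch, e.g.:
    dmesg | grep HugeTLB
    # HugeTLB: allocating 512 of page size 2.00 MiB failed. Only allocated 384 hugepages.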