Message ID | 20230912162815.440749-3-zi.yan@sent.com |
---|---|
State | New |
Headers |
From: Zi Yan <zi.yan@sent.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Zi Yan <ziy@nvidia.com>, Ryan Roberts <ryan.roberts@arm.com>, Andrew Morton <akpm@linux-foundation.org>, "Matthew Wilcox (Oracle)" <willy@infradead.org>, David Hildenbrand <david@redhat.com>, "Yin, Fengwei" <fengwei.yin@intel.com>, Yu Zhao <yuzhao@google.com>, Vlastimil Babka <vbabka@suse.cz>, Johannes Weiner <hannes@cmpxchg.org>, Baolin Wang <baolin.wang@linux.alibaba.com>, Kemeng Shi <shikemeng@huaweicloud.com>, Mel Gorman <mgorman@techsingularity.net>, Rohan Puri <rohan.puri15@gmail.com>, Mcgrof Chamberlain <mcgrof@kernel.org>, Adam Manzanares <a.manzanares@samsung.com>, John Hubbard <jhubbard@nvidia.com>
Subject: [RFC PATCH 2/4] mm/compaction: optimize >0 order folio compaction with free page split.
Date: Tue, 12 Sep 2023 12:28:13 -0400
Message-Id: <20230912162815.440749-3-zi.yan@sent.com>
In-Reply-To: <20230912162815.440749-1-zi.yan@sent.com>
References: <20230912162815.440749-1-zi.yan@sent.com>
Reply-To: Zi Yan <ziy@nvidia.com>
Series | Enable >0 order folio memory compaction |
Commit Message
Zi Yan
Sept. 12, 2023, 4:28 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

During migration in memory compaction, free pages are placed in an array
of page lists based on their order. But the desired free page order (i.e.,
the order of a source page) might not always be present, leading to
migration failures. Split a higher-order free page when the source
migration page has a lower order to increase the migration success rate.

Note: merging free pages when a migration fails and a lower-order free
page is returned via compaction_free() is possible, but it would be too
much work. Since the free pages are not buddy pages, it is hard to
identify them using the existing PFN-based page merging algorithm.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)
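To make the split bookkeeping concrete, below is a minimal userspace sketch of the loop the patch adds (see the diff at the end of this page). The per-order counters stand in for cc->freepages[].nr_free; everything else is a simplification, not kernel code. Splitting one order-3 block to satisfy an order-0 request parks one piece each on the order-2, order-1, and order-0 lists and hands out the remaining order-0 page:

#include <stdio.h>

#define MAX_ORDER 10

/* Stand-in for cc->freepages[]: only the nr_free counts are modeled. */
static int nr_free[MAX_ORDER + 1];

/*
 * Mirror of the patch's while (start_order > order) loop: repeatedly
 * halve the block, parking the upper half (&freepage[size] in the
 * patch) on the list for its new, smaller order.
 */
static void split_free_block(int start_order, int order)
{
	unsigned long size = 1UL << start_order;

	nr_free[start_order]--;			/* list_del() in the patch */
	while (start_order > order) {
		start_order--;
		size >>= 1;
		nr_free[start_order]++;		/* list_add() + nr_free++ */
		printf("parked order-%d piece at page offset %lu\n",
		       start_order, size);
	}
	/* pages [0, 1 << order) of the block become the destination folio */
}

int main(void)
{
	nr_free[3] = 1;			/* one free order-3 (8-page) block */
	split_free_block(3, 0);		/* an order-0 source page arrives */

	for (int i = 0; i <= 3; i++)
		printf("freepages[%d].nr_free = %d\n", i, nr_free[i]);
	return 0;
}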
Comments
On 9/13/2023 12:28 AM, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> During migration in memory compaction, free pages are placed in an array
> of page lists based on their order. But the desired free page order (i.e.,
> the order of a source page) might not always be present, leading to
> migration failures. Split a higher-order free page when the source
> migration page has a lower order to increase the migration success rate.
>
> Note: merging free pages when a migration fails and a lower-order free
> page is returned via compaction_free() is possible, but it would be too
> much work. Since the free pages are not buddy pages, it is hard to
> identify them using the existing PFN-based page merging algorithm.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 868e92e55d27..45747ab5f380 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1801,9 +1801,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  	int order = folio_order(src);
> +	bool has_isolated_pages = false;
>  
> +again:
>  	if (!cc->freepages[order].nr_free) {
> -		isolate_freepages(cc);
> +		int i;
> +
> +		for (i = order + 1; i <= MAX_ORDER; i++) {
> +			if (cc->freepages[i].nr_free) {
> +				struct page *freepage =
> +					list_first_entry(&cc->freepages[i].pages,
> +							 struct page, lru);
> +
> +				int start_order = i;
> +				unsigned long size = 1 << start_order;
> +
> +				list_del(&freepage->lru);
> +				cc->freepages[i].nr_free--;
> +
> +				while (start_order > order) {
> +					start_order--;
> +					size >>= 1;
> +
> +					list_add(&freepage[size].lru,
> +						 &cc->freepages[start_order].pages);
> +					cc->freepages[start_order].nr_free++;
> +					set_page_private(&freepage[size], start_order);

IIUC, these split pages should also call the initialization functions, e.g. prep_compound_page()?

> +				}
> +				post_alloc_hook(freepage, order, __GFP_MOVABLE);
> +				if (order)
> +					prep_compound_page(freepage, order);
> +				dst = page_folio(freepage);
> +				goto done;
> +			}
> +		}
> +		if (!has_isolated_pages) {
> +			isolate_freepages(cc);
> +			has_isolated_pages = true;
> +			goto again;
> +		}
> +
>  		if (!cc->freepages[order].nr_free)
>  			return NULL;
>  	}
> @@ -1814,6 +1851,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>  	if (order)
>  		prep_compound_page(&dst->page, order);
> +done:
>  	cc->nr_freepages -= 1 << order;
>  	return dst;
>  }
On 18 Sep 2023, at 3:34, Baolin Wang wrote:

> On 9/13/2023 12:28 AM, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> During migration in memory compaction, free pages are placed in an array
>> of page lists based on their order. But the desired free page order (i.e.,
>> the order of a source page) might not always be present, leading to
>> migration failures. Split a higher-order free page when the source
>> migration page has a lower order to increase the migration success rate.
>>
>> Note: merging free pages when a migration fails and a lower-order free
>> page is returned via compaction_free() is possible, but it would be too
>> much work. Since the free pages are not buddy pages, it is hard to
>> identify them using the existing PFN-based page merging algorithm.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 39 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 868e92e55d27..45747ab5f380 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1801,9 +1801,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>  	struct compact_control *cc = (struct compact_control *)data;
>>  	struct folio *dst;
>>  	int order = folio_order(src);
>> +	bool has_isolated_pages = false;
>>  
>> +again:
>>  	if (!cc->freepages[order].nr_free) {
>> -		isolate_freepages(cc);
>> +		int i;
>> +
>> +		for (i = order + 1; i <= MAX_ORDER; i++) {
>> +			if (cc->freepages[i].nr_free) {
>> +				struct page *freepage =
>> +					list_first_entry(&cc->freepages[i].pages,
>> +							 struct page, lru);
>> +
>> +				int start_order = i;
>> +				unsigned long size = 1 << start_order;
>> +
>> +				list_del(&freepage->lru);
>> +				cc->freepages[i].nr_free--;
>> +
>> +				while (start_order > order) {
>> +					start_order--;
>> +					size >>= 1;
>> +
>> +					list_add(&freepage[size].lru,
>> +						 &cc->freepages[start_order].pages);
>> +					cc->freepages[start_order].nr_free++;
>> +					set_page_private(&freepage[size], start_order);
>
> IIUC, these split pages should also call the initialization functions, e.g. prep_compound_page()?

Not at this place. It is done right below, above the "done" label. While free pages
are on cc->freepages, we want to keep them unprocessed by post_alloc_hook() or
prep_compound_page() so that a future split remains possible. A free page is
only initialized when it is returned by compaction_alloc().

>
>> +				}
>> +				post_alloc_hook(freepage, order, __GFP_MOVABLE);
>> +				if (order)
>> +					prep_compound_page(freepage, order);
>> +				dst = page_folio(freepage);
>> +				goto done;
>> +			}
>> +		}
>> +		if (!has_isolated_pages) {
>> +			isolate_freepages(cc);
>> +			has_isolated_pages = true;
>> +			goto again;
>> +		}
>> +
>>  		if (!cc->freepages[order].nr_free)
>>  			return NULL;
>>  	}
>> @@ -1814,6 +1851,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>>  	if (order)
>>  		prep_compound_page(&dst->page, order);
>> +done:
>>  	cc->nr_freepages -= 1 << order;
>>  	return dst;
>>  }

--
Best Regards,
Yan, Zi
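The lifecycle described in that answer can be pictured with a toy userspace model. Here, struct raw_block and init_for_use() are hypothetical stand-ins: the kernel equivalents are the order stashed in page_private by the split loop and the post_alloc_hook()/prep_compound_page() calls made only at hand-out time.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of a page parked on cc->freepages: it carries only its
 * order (the patch keeps this in page_private) and is deliberately
 * left uninitialized so a later request can still split it.
 */
struct raw_block {
	int order;
	bool initialized;
};

/* Hypothetical stand-in for post_alloc_hook() + prep_compound_page(). */
static void init_for_use(struct raw_block *b)
{
	b->initialized = true;
}

int main(void)
{
	struct raw_block b = { .order = 2, .initialized = false };

	/* A later split only rewrites the order bookkeeping... */
	b.order = 1;
	printf("parked:     order=%d initialized=%d\n", b.order, b.initialized);

	/* ...initialization happens once, when compaction_alloc() hands
	 * the block out as a destination folio. */
	init_for_use(&b);
	printf("handed out: order=%d initialized=%d\n", b.order, b.initialized);
	return 0;
}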
On 9/19/2023 1:20 AM, Zi Yan wrote:
> On 18 Sep 2023, at 3:34, Baolin Wang wrote:
>
>> On 9/13/2023 12:28 AM, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> During migration in memory compaction, free pages are placed in an array
>>> of page lists based on their order. But the desired free page order (i.e.,
>>> the order of a source page) might not always be present, leading to
>>> migration failures. Split a higher-order free page when the source
>>> migration page has a lower order to increase the migration success rate.
>>>
>>> [...]
>>>
>>> +				while (start_order > order) {
>>> +					start_order--;
>>> +					size >>= 1;
>>> +
>>> +					list_add(&freepage[size].lru,
>>> +						 &cc->freepages[start_order].pages);
>>> +					cc->freepages[start_order].nr_free++;
>>> +					set_page_private(&freepage[size], start_order);
>>
>> IIUC, these split pages should also call the initialization functions, e.g. prep_compound_page()?
>
> Not at this place. It is done right below, above the "done" label. While free pages
> are on cc->freepages, we want to keep them unprocessed by post_alloc_hook() or
> prep_compound_page() so that a future split remains possible. A free page is
> only initialized when it is returned by compaction_alloc().

Ah, I see. Thanks for the explanation.
diff --git a/mm/compaction.c b/mm/compaction.c
index 868e92e55d27..45747ab5f380 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1801,9 +1801,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 	int order = folio_order(src);
+	bool has_isolated_pages = false;
 
+again:
 	if (!cc->freepages[order].nr_free) {
-		isolate_freepages(cc);
+		int i;
+
+		for (i = order + 1; i <= MAX_ORDER; i++) {
+			if (cc->freepages[i].nr_free) {
+				struct page *freepage =
+					list_first_entry(&cc->freepages[i].pages,
+							 struct page, lru);
+
+				int start_order = i;
+				unsigned long size = 1 << start_order;
+
+				list_del(&freepage->lru);
+				cc->freepages[i].nr_free--;
+
+				while (start_order > order) {
+					start_order--;
+					size >>= 1;
+
+					list_add(&freepage[size].lru,
+						 &cc->freepages[start_order].pages);
+					cc->freepages[start_order].nr_free++;
+					set_page_private(&freepage[size], start_order);
+				}
+				post_alloc_hook(freepage, order, __GFP_MOVABLE);
+				if (order)
+					prep_compound_page(freepage, order);
+				dst = page_folio(freepage);
+				goto done;
+			}
+		}
+		if (!has_isolated_pages) {
+			isolate_freepages(cc);
+			has_isolated_pages = true;
+			goto again;
+		}
+
 		if (!cc->freepages[order].nr_free)
 			return NULL;
 	}
@@ -1814,6 +1851,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
 	if (order)
 		prep_compound_page(&dst->page, order);
+done:
 	cc->nr_freepages -= 1 << order;
 	return dst;
 }
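Taken together, the patched compaction_alloc() falls back through three stages: the exact-order list, a split of a larger block, then a single isolate_freepages() refill before giving up. Below is a compact userspace model of that control flow; the counters again stand in for the free lists, and isolate_more() is a hypothetical refill, not the real scanner.

#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER 10
static int nr_free[MAX_ORDER + 1];

/* Hypothetical stand-in for isolate_freepages(): pretend one more
 * order-2 block was isolated by the free page scanner. */
static void isolate_more(void)
{
	nr_free[2]++;
}

/* Mirrors the fallback chain of the patched compaction_alloc(). */
static bool alloc_order(int order)
{
	bool has_isolated_pages = false;
again:
	if (!nr_free[order]) {
		for (int i = order + 1; i <= MAX_ORDER; i++) {
			if (nr_free[i]) {		/* split a larger block */
				int start_order = i;

				nr_free[i]--;
				while (start_order > order)
					nr_free[--start_order]++;
				return true;		/* lower piece handed out */
			}
		}
		if (!has_isolated_pages) {		/* one-shot refill */
			isolate_more();
			has_isolated_pages = true;
			goto again;
		}
		if (!nr_free[order])
			return false;			/* give up: migration fails */
	}
	nr_free[order]--;				/* exact-order fast path */
	return true;
}

int main(void)
{
	printf("order-0 request: %s\n", alloc_order(0) ? "ok" : "fail");
	printf("order-3 request: %s\n", alloc_order(3) ? "ok" : "fail");
	return 0;
}

Note that the refill is attempted at most once per call (the has_isolated_pages flag), so a request that no list can satisfy fails promptly instead of looping.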