[RFC,3/4] mm/compaction: optimize >0 order folio compaction by sorting source pages.
Message ID | 20230912162815.440749-4-zi.yan@sent.com
---|---
State | New
Series | Enable >0 order folio memory compaction
Commit Message
Zi Yan
Sept. 12, 2023, 4:28 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

Sorting the source folios by order, highest first, should maximize high
order free page use and minimize free page splits. It might be useful
before free page merging is implemented.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
Comments
On 12 Sep 2023, at 13:56, Johannes Weiner wrote:

> On Tue, Sep 12, 2023 at 12:28:14PM -0400, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> It should maximize high order free page use and minimize free page splits.
>> It might be useful before free page merging is implemented.
>
> The premise sounds reasonable to me: start with the largest chunks in
> the hope of producing the desired block size before having to piece
> things together from the order-0s dribbles.
>
>> @@ -145,6 +145,38 @@ static void sort_free_pages(struct list_head *src, struct free_list *dst)
>>  	}
>>  }
>>
>> +static void sort_folios_by_order(struct list_head *pages)
>> +{
>> +	struct free_list page_list[MAX_ORDER + 1];
>> +	int order;
>> +	struct folio *folio, *next;
>> +
>> +	for (order = 0; order <= MAX_ORDER; order++) {
>> +		INIT_LIST_HEAD(&page_list[order].pages);
>> +		page_list[order].nr_free = 0;
>> +	}
>> +
>> +	list_for_each_entry_safe(folio, next, pages, lru) {
>> +		order = folio_order(folio);
>> +
>> +		if (order > MAX_ORDER)
>> +			continue;
>> +
>> +		list_move(&folio->lru, &page_list[order].pages);
>> +		page_list[order].nr_free++;
>> +	}
>> +
>> +	for (order = MAX_ORDER; order >= 0; order--) {
>> +		if (page_list[order].nr_free) {
>> +
>> +			list_for_each_entry_safe(folio, next,
>> +					&page_list[order].pages, lru) {
>> +				list_move_tail(&folio->lru, pages);
>> +			}
>> +		}
>> +	}
>> +}
>> +
>>  #ifdef CONFIG_COMPACTION
>>  bool PageMovable(struct page *page)
>>  {
>> @@ -2636,6 +2668,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>>  				pageblock_start_pfn(cc->migrate_pfn - 1));
>>  	}
>>
>> +	sort_folios_by_order(&cc->migratepages);
>
> Would it make sense to have isolate_migratepages_block() produce a
> sorted list already? By collecting into a struct free_list in there
> and finishing with that `for (order = MAX...) list_add_tail()' loop.
>
> That would save quite a bit of shuffling around. Compaction can be
> hot, and is expected to get hotter with growing larger order pressure.

Yes, that sounds reasonable. Will do that in the next version.

>
> The contig allocator doesn't care about ordering, but it should be
> possible to gate the sorting reasonably on !cc->alloc_contig.

Right. For !cc->alloc_contig, pages are put in struct free_list and
later sorted and moved to cc->migratepages. For cc->alloc_contig, pages
are directly put on cc->migratepages.

>
> An optimization down the line could be to skip the sorted list
> assembly for the compaction case entirely, have compact_zone() work
> directly on struct free_list, starting with the highest order and
> checking compact_finished() in between orders.

Sounds reasonable. It actually makes me think more and realize that
sorting source pages might not be optimal all the time. In general,
migrating higher order folios first would generate larger free spaces,
which might meet the compaction goal faster. But in some cases, free
pages of the target order, e.g. order 4, can be generated by migrating
one order-3 page and the eight order-0 pages around it. This means we
might waste effort by migrating all the high order pages first. I guess
there will be a lot of possible optimizations for different
situations. :)

--
Best Regards,
Yan, Zi
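To make the direction discussed above concrete, here is a rough sketch of
bucketing folios by order at isolation time, gated on !cc->alloc_contig, so
that no separate sorting pass over cc->migratepages is needed. This is not
part of the posted patch: the cc->migrate_buckets array and both helper names
are hypothetical, and struct free_list is assumed to keep the { pages,
nr_free } layout introduced earlier in this series.

/*
 * Illustrative sketch only -- not from the posted patch.  It invents a
 * hypothetical cc->migrate_buckets[MAX_ORDER + 1] array of struct
 * free_list to show where per-order bucketing at isolation time could
 * live.
 */
static void record_isolated_folio(struct compact_control *cc,
				  struct folio *folio)
{
	int order = folio_order(folio);

	/* alloc_contig_range() does not care about migration order. */
	if (cc->alloc_contig) {
		list_add_tail(&folio->lru, &cc->migratepages);
		return;
	}

	/* Bucket by order; the final list is assembled highest order first. */
	list_add_tail(&folio->lru, &cc->migrate_buckets[order].pages);
	cc->migrate_buckets[order].nr_free++;
}

static void assemble_migratepages(struct compact_control *cc)
{
	struct folio *folio, *next;
	int order;

	if (cc->alloc_contig)
		return;

	/* Highest order first, so large folios are migrated before order-0. */
	for (order = MAX_ORDER; order >= 0; order--)
		list_for_each_entry_safe(folio, next,
					 &cc->migrate_buckets[order].pages, lru)
			list_move_tail(&folio->lru, &cc->migratepages);
}

With a layout like this, compact_zone() could also skip the list assembly
entirely and walk the buckets directly, checking compact_finished() between
orders, as suggested in the thread above.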
diff --git a/mm/compaction.c b/mm/compaction.c
index 45747ab5f380..4300d877b824 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -145,6 +145,38 @@ static void sort_free_pages(struct list_head *src, struct free_list *dst)
 	}
 }
 
+static void sort_folios_by_order(struct list_head *pages)
+{
+	struct free_list page_list[MAX_ORDER + 1];
+	int order;
+	struct folio *folio, *next;
+
+	for (order = 0; order <= MAX_ORDER; order++) {
+		INIT_LIST_HEAD(&page_list[order].pages);
+		page_list[order].nr_free = 0;
+	}
+
+	list_for_each_entry_safe(folio, next, pages, lru) {
+		order = folio_order(folio);
+
+		if (order > MAX_ORDER)
+			continue;
+
+		list_move(&folio->lru, &page_list[order].pages);
+		page_list[order].nr_free++;
+	}
+
+	for (order = MAX_ORDER; order >= 0; order--) {
+		if (page_list[order].nr_free) {
+
+			list_for_each_entry_safe(folio, next,
+					&page_list[order].pages, lru) {
+				list_move_tail(&folio->lru, pages);
+			}
+		}
+	}
+}
+
 #ifdef CONFIG_COMPACTION
 bool PageMovable(struct page *page)
 {
@@ -2636,6 +2668,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 				pageblock_start_pfn(cc->migrate_pfn - 1));
 	}
 
+	sort_folios_by_order(&cc->migratepages);
+
 	err = migrate_pages(&cc->migratepages, compaction_alloc,
 			compaction_free, (unsigned long)cc, cc->mode,
 			MR_COMPACTION, &nr_succeeded);