Message ID | 64899ad0bb78cde88b52abed1a5a5abbc9919998.1697632761.git.baolin.wang@linux.alibaba.com |
---|---|
State | New |
Headers | From: Baolin Wang <baolin.wang@linux.alibaba.com> To: akpm@linux-foundation.org Cc: mgorman@techsingularity.net, hughd@google.com, vbabka@suse.cz, ying.huang@intel.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH] mm: migrate: record the mlocked page status to remove unnecessary lru drain Date: Wed, 18 Oct 2023 21:04:32 +0800 Message-Id: <64899ad0bb78cde88b52abed1a5a5abbc9919998.1697632761.git.baolin.wang@linux.alibaba.com> X-Mailer: git-send-email 2.39.3 |
Series | mm: migrate: record the mlocked page status to remove unnecessary lru drain |
Commit Message
Baolin Wang
Oct. 18, 2023, 1:04 p.m. UTC
When doing compaction, I found that lru_add_drain() is an obvious hotspot
when migrating pages. The distribution of this hotspot is as follows:
   - 18.75% compact_zone
      - 17.39% migrate_pages
         - 13.79% migrate_pages_batch
            - 11.66% migrate_folio_move
               - 7.02% lru_add_drain
                  + 7.02% lru_add_drain_cpu
               + 3.00% move_to_new_folio
                 1.23% rmap_walk
            + 1.92% migrate_folio_unmap
         + 3.20% migrate_pages_sync
      + 0.90% isolate_migratepages
The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
__unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
immediately, to help build up the correct newpage->mlock_count in
remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
migrating, we can avoid this lru drain operation, especially in heavy
concurrent scenarios.
So we can record the source pages' mlocked status in migrate_folio_unmap(),
and only drain the lru list when the mlocked status is set in migrate_folio_move().
In addition, the page has already been isolated from the lru when migrating, so
the mlocked status checked by folio_test_mlocked() in migrate_folio_unmap() is stable.
After this patch, I can see the hotspot of the lru_add_drain() is gone:
   - 9.41% migrate_pages_batch
      - 6.15% migrate_folio_move
         - 3.64% move_to_new_folio
            + 1.80% migrate_folio_extra
            + 1.70% buffer_migrate_folio
         + 1.41% rmap_walk
         + 0.62% folio_add_lru
      + 3.07% migrate_folio_unmap
Meanwhile, the compaction latency shows some improvements when running
thpscale:
                                  base                  patched
Amean  fault-both-1      1131.22 (   0.00%)     1112.55 *   1.65%*
Amean  fault-both-3      2489.75 (   0.00%)     2324.15 *   6.65%*
Amean  fault-both-5      3257.37 (   0.00%)     3183.18 *   2.28%*
Amean  fault-both-7      4257.99 (   0.00%)     4079.04 *   4.20%*
Amean  fault-both-12     6614.02 (   0.00%)     6075.60 *   8.14%*
Amean  fault-both-18    10607.78 (   0.00%)     8978.86 *  15.36%*
Amean  fault-both-24    14911.65 (   0.00%)    11619.55 *  22.08%*
Amean  fault-both-30    14954.67 (   0.00%)    14925.66 *   0.19%*
Amean  fault-both-32    16654.87 (   0.00%)    15580.31 *   6.45%*
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/migrate.c | 50 ++++++++++++++++++++++++++++++++++++++------------
1 file changed, 38 insertions(+), 12 deletions(-)
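
For readers skimming the thread, the core mechanism in condensed form; this is a
restatement of the diff quoted in the replies below, not new code. The mlocked
status is sampled while the source folio is still locked, packed alongside
page_was_mapped into the dst->private slot, and unpacked on the move side to
decide whether the per-CPU lru batch needs draining:

	/* Condensed from the patch (mm/migrate.c); not a standalone program. */
	enum {
		PAGE_WAS_MAPPED  = 1 << 0,
		PAGE_WAS_MLOCKED = 1 << 1,
	};

	/* Unmap side: sample the status while src is locked, then stash
	 * both bits in dst->private via __migrate_folio_record(). */
	page_was_mlocked = folio_test_mlocked(src);
	if (page_was_mapped)
		page_flags |= PAGE_WAS_MAPPED;
	if (page_was_mlocked)
		page_flags |= PAGE_WAS_MLOCKED;
	__migrate_folio_record(dst, page_flags, anon_vma);

	/* Move side: the drain is now needed only to fix up mlock_count. */
	folio_add_lru(dst);
	if (page_was_mlocked)
		lru_add_drain();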
Comments
On 18 Oct 2023, at 9:04, Baolin Wang wrote:

> When doing compaction, I found that lru_add_drain() is an obvious hotspot
> when migrating pages.
[...]
> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
> immediately, to help build up the correct newpage->mlock_count in
> remove_migration_ptes() for mlocked pages. However, if no mlocked pages are
> migrating, we can avoid this lru drain operation, especially in heavy
> concurrent scenarios.

lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
have an additional pin to prevent migration. See folio_get(folio); in folio_add_lru().

[...]

> diff --git a/mm/migrate.c b/mm/migrate.c
> index 4caf405b6504..32c96f89710f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1027,22 +1027,32 @@ union migration_ptr {
>  	struct anon_vma *anon_vma;
>  	struct address_space *mapping;
>  };
> +
> +enum {
> +	PAGE_WAS_MAPPED = 1 << 0,
> +	PAGE_WAS_MLOCKED = 1 << 1,
> +};
> +
>  static void __migrate_folio_record(struct folio *dst,
> -				   unsigned long page_was_mapped,
> +				   unsigned long page_flags,
>  				   struct anon_vma *anon_vma)
>  {
>  	union migration_ptr ptr = { .anon_vma = anon_vma };
>  	dst->mapping = ptr.mapping;
> -	dst->private = (void *)page_was_mapped;
> +	dst->private = (void *)page_flags;
>  }
>
>  static void __migrate_folio_extract(struct folio *dst,
>  				   int *page_was_mappedp,
> +				   int *page_was_mlocked,
>  				   struct anon_vma **anon_vmap)
>  {
>  	union migration_ptr ptr = { .mapping = dst->mapping };
> +	unsigned long page_flags = (unsigned long)dst->private;
> +
>  	*anon_vmap = ptr.anon_vma;
> -	*page_was_mappedp = (unsigned long)dst->private;
> +	*page_was_mappedp = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
> +	*page_was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
>  	dst->mapping = NULL;
>  	dst->private = NULL;
>  }
> @@ -1103,7 +1113,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  {
>  	struct folio *dst;
>  	int rc = -EAGAIN;
> -	int page_was_mapped = 0;
> +	int page_was_mapped = 0, page_was_mlocked = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__folio_test_movable(src);
>  	bool locked = false;
> @@ -1157,6 +1167,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  			folio_lock(src);
>  	}
>  	locked = true;
> +	page_was_mlocked = folio_test_mlocked(src);
>
>  	if (folio_test_writeback(src)) {
>  		/*
> @@ -1206,7 +1217,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  	dst_locked = true;
>
>  	if (unlikely(!is_lru)) {
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		__migrate_folio_record(dst, 0, anon_vma);
>  		return MIGRATEPAGE_UNMAP;
>  	}
>
> @@ -1236,7 +1247,13 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>  	}
>
>  	if (!folio_mapped(src)) {
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		unsigned int page_flags = 0;
> +
> +		if (page_was_mapped)
> +			page_flags |= PAGE_WAS_MAPPED;
> +		if (page_was_mlocked)
> +			page_flags |= PAGE_WAS_MLOCKED;
> +		__migrate_folio_record(dst, page_flags, anon_vma);
>  		return MIGRATEPAGE_UNMAP;
>  	}
>
> @@ -1261,12 +1278,13 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  		struct list_head *ret)
>  {
>  	int rc;
> -	int page_was_mapped = 0;
> +	int page_was_mapped = 0, page_was_mlocked = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__folio_test_movable(src);
>  	struct list_head *prev;
>
> -	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
> +	__migrate_folio_extract(dst, &page_was_mapped,
> +				&page_was_mlocked, &anon_vma);

It is better to read out the flag, then check page_was_mapped and page_was_mlocked
to avoid future __migrate_folio_extract() interface churns.

>  	prev = dst->lru.prev;
>  	list_del(&dst->lru);
>
> @@ -1287,7 +1305,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  	 * isolated from the unevictable LRU: but this case is the easiest.
>  	 */
>  	folio_add_lru(dst);
> -	if (page_was_mapped)
> +	if (page_was_mlocked)
>  		lru_add_drain();

Like I said at the top, this would be if (page_was_mapped || page_was_mlocked).

>
>  	if (page_was_mapped)
> @@ -1321,8 +1339,15 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>  	 * right list unless we want to retry.
>  	 */
>  	if (rc == -EAGAIN) {
> +		unsigned int page_flags = 0;
> +
> +		if (page_was_mapped)
> +			page_flags |= PAGE_WAS_MAPPED;
> +		if (page_was_mlocked)
> +			page_flags |= PAGE_WAS_MLOCKED;
> +
>  		list_add(&dst->lru, prev);
> -		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		__migrate_folio_record(dst, page_flags, anon_vma);
>  		return rc;
>  	}
>
> @@ -1799,10 +1824,11 @@ static int migrate_pages_batch(struct list_head *from,
>  		dst = list_first_entry(&dst_folios, struct folio, lru);
>  		dst2 = list_next_entry(dst, lru);
>  		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
> -			int page_was_mapped = 0;
> +			int page_was_mapped = 0, page_was_mlocked = 0;
>  			struct anon_vma *anon_vma = NULL;
>
> -			__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
> +			__migrate_folio_extract(dst, &page_was_mapped,
> +						&page_was_mlocked, &anon_vma);
>  			migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
>  					       true, ret_folios);
>  			list_del(&dst->lru);
> --
> 2.39.3

--
Best Regards,
Yan, Zi
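
The extra pin Zi Yan refers to is taken inside folio_add_lru() before the folio is
stashed in the per-CPU batch. A paraphrase from mm/swap.c of this era (debug checks
and the MGLRU activation case are elided, so treat this as a sketch rather than the
exact source):

	/* Paraphrased from mm/swap.c (~v6.6); VM_BUG_ON checks and the
	 * MGLRU special case are elided. */
	void folio_add_lru(struct folio *folio)
	{
		struct folio_batch *fbatch;

		/* The extra reference that pins a batched folio until drained. */
		folio_get(folio);
		local_lock(&cpu_fbatches.lock);
		fbatch = this_cpu_ptr(&cpu_fbatches.lru_add);
		folio_batch_add_and_move(fbatch, folio, lru_add_fn);
		local_unlock(&cpu_fbatches.lock);
	}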
Zi Yan <ziy@nvidia.com> writes:

> On 18 Oct 2023, at 9:04, Baolin Wang wrote:
>
>> However, if no mlocked pages are migrating, we can avoid this lru drain
>> operation, especially in heavy concurrent scenarios.
>
> lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
> have an additional pin to prevent migration. See folio_get(folio); in folio_add_lru().

lru_add_drain() is called after the page reference count checking in
move_to_new_folio(). So, I don't think this is an issue.

[...]

>> -	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
>> +	__migrate_folio_extract(dst, &page_was_mapped,
>> +				&page_was_mlocked, &anon_vma);
>
> It is better to read out the flag, then check page_was_mapped and page_was_mlocked
> to avoid future __migrate_folio_extract() interface churns.

IMHO, in contrast, it's better to use separate flags in
__migrate_folio_record() too, to avoid packing flags at each call site.

--
Best Regards,
Huang, Ying
On 10/19/2023 2:09 PM, Huang, Ying wrote:
> Zi Yan <ziy@nvidia.com> writes:
>
>> lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
>> have an additional pin to prevent migration. See folio_get(folio); in folio_add_lru().
>
> lru_add_drain() is called after the page reference count checking in
> move_to_new_folio(). So, I don't think this is an issue.

Agree. The purpose of adding lru_add_drain() is to address the 'mlock_count' issue
for mlocked pages. Please see commit c3096e6782b7 and related comments. Moreover, I
haven't seen an increase in the number of page migration failures due to the page
reference count check after this patch.

[...]

>> It is better to read out the flag, then check page_was_mapped and page_was_mlocked
>> to avoid future __migrate_folio_extract() interface churns.
>
> IMHO, in contrast, it's better to use separate flags in
> __migrate_folio_record() too, to avoid packing flags at each call site.

Either way is okay for me, and avoiding packing flags at each call site seems more
reasonable to me.

>>>  	folio_add_lru(dst);
>>> -	if (page_was_mapped)
>>> +	if (page_was_mlocked)
>>>  		lru_add_drain();
>>
>> Like I said at the top, this would be if (page_was_mapped || page_was_mlocked).

I don't think so. Like I said above, we can drain the lru list only if the page was
mlocked.
Hi Baolin,

On 10/19/23 15:25, Baolin Wang wrote:
> Agree. The purpose of adding lru_add_drain() is to address the 'mlock_count' issue
> for mlocked pages. Please see commit c3096e6782b7 and related comments. Moreover, I
> haven't seen an increase in the number of page migration failures due to the page
> reference count check after this patch.

I agree with you. My understanding also is that the lru_add_drain() is only needed
for mlocked folios to correct the mlock_count. I'd like to hear confirmation from Hugh.

But I have a question: why do we need to use page_was_mlocked instead of checking
folio_test_mlocked(src)? Does page migration clear the mlock flag? Thanks.

Regards
Yin, Fengwei
On 10/19/2023 4:22 PM, Yin Fengwei wrote:
> But I have a question: why do we need to use page_was_mlocked instead of checking
> folio_test_mlocked(src)? Does page migration clear the mlock flag? Thanks.

Yes, please see the call trace: try_to_migrate_one() ---> page_remove_rmap() --->
munlock_vma_folio().
On 10/19/2023 4:51 PM, Baolin Wang wrote:
> Yes, please see the call trace: try_to_migrate_one() ---> page_remove_rmap() --->
> munlock_vma_folio().

Yes. This will clear the mlock bit.

What about setting the dst folio mlocked, if the source was, before try_to_migrate_one()?
Then check whether the dst folio is mlocked afterwards? We would also need to clear
mlocked if migration fails. I suppose the change is minor. Just a thought. Thanks.

Regards
Yin, Fengwei
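
A minimal sketch of the alternative Yin floats here, assuming the mlocked page flag
can simply be mirrored onto dst with the usual folio flag helpers; this is
hypothetical, not the posted patch, and Baolin's replies below explain why it breaks
the mlock statistics and races with a concurrent munlock():

	/* Hypothetical sketch of the alternative above -- not the patch. */
	if (folio_test_mlocked(src))
		folio_set_mlocked(dst);	/* mirror before try_to_migrate_one()
					 * clears the bit on src */

	/* ... unmap, copy, remap ... */

	if (folio_test_mlocked(dst))	/* status survives on dst */
		lru_add_drain();
	folio_clear_mlocked(dst);	/* must also run on failure paths */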
On 19 Oct 2023, at 2:09, Huang, Ying wrote:
>> lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
>> have an additional pin to prevent migration. See folio_get(folio); in folio_add_lru().
>
> lru_add_drain() is called after the page reference count checking in
> move_to_new_folio(). So, I don't think this is an issue.

You are right. I missed that. Thanks for pointing this out.

[...]

> IMHO, in contrast, it's better to use separate flags in
> __migrate_folio_record() too, to avoid packing flags at each call site.

I am OK with it as long as the parameters of these two are symmetric.

--
Best Regards,
Yan, Zi
On 10/19/2023 8:07 PM, Yin, Fengwei wrote:
> What about setting the dst folio mlocked, if the source was, before try_to_migrate_one()?
> Then check whether the dst folio is mlocked afterwards? We would also need to clear
> mlocked if migration fails. I suppose the change is minor. Just a thought. Thanks.

IMO, this will break the mlock-related statistics in mlock_folio() when
remove_migration_pte() rebuilds the mlock status and mlock count.

Another concern I can see is that, during the page migration, a concurrent munlock()
can be called to clear the VM_LOCKED flags on the VMAs, in which case
remove_migration_pte() should not rebuild the mlock status and mlock count. But the
dst folio's mlocked status would still remain, which is wrong.

So your suggested approach seems not easy, and I think my patch is simpler, re-using
the existing __migrate_folio_record() and __migrate_folio_extract() :)
On 10/20/2023 10:09 AM, Baolin Wang wrote:
> Another concern I can see is that, during the page migration, a concurrent munlock()
> can be called to clear the VM_LOCKED flags on the VMAs, in which case
> remove_migration_pte() should not rebuild the mlock status and mlock count. But the
> dst folio's mlocked status would still remain, which is wrong.

Can these concerns be addressed by clearing dst mlocked after lru_add_drain() but
before remove_migration_pte()?

Regards
Yin, Fengwei
On 10/20/2023 10:30 AM, Yin, Fengwei wrote:
>
> On 10/20/2023 10:09 AM, Baolin Wang wrote:
>> [...]
>> So your suggested approach seems not easy, and I think my patch is simple,
>> re-using the existing __migrate_folio_record() and __migrate_folio_extract() :)
>
> Can these concerns be addressed by clearing the dst mlocked status after
> lru_add_drain() but before remove_migration_pte()?

IMHO, that seems too hacky to me. I still prefer to rely on the migration
process of the mlocked pages.
On 10/20/2023 10:45 AM, Baolin Wang wrote:
>
> On 10/20/2023 10:30 AM, Yin, Fengwei wrote:
>> [...]
>> Can these concerns be addressed by clearing the dst mlocked status after
>> lru_add_drain() but before remove_migration_pte()?
>
> IMHO, that seems too hacky to me. I still prefer to rely on the migration
> process of the mlocked pages.
Fair enough. Thanks.


Regards
Yin, Fengwei
On 10/20/2023 10:45 AM, Baolin Wang wrote:
> [...]
> IMHO, that seems too hacky to me. I still prefer to rely on the migration
> process of the mlocked pages.

BTW, Yosry tried to address the overlap of the lru field and mlock_count:
https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
But lore doesn't group all the patches.


Regards
Yin, Fengwei
On 10/20/2023 10:54 AM, Yin, Fengwei wrote:
> [...]
> BTW, Yosry tried to address the overlap of the lru field and mlock_count:
> https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
> But lore doesn't group all the patches.

Thanks for the information. I'd like to review and test whether this work
can continue.
> >> IMHO, that seems too hacky to me. I still prefer to rely on the migration
> >> process of the mlocked pages.
> >
> > BTW, Yosry tried to address the overlap of the lru field and mlock_count:
> > https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
> > But lore doesn't group all the patches.
>
> Thanks for the information. I'd like to review and test whether this work
> can continue.

The motivation for this work was reviving the unevictable LRU for the
memcg recharging RFC series [1]. However, that series was heavily
criticized. I was not intending to follow up on it.

If reworking the mlock_count is beneficial for other reasons, I am
happy to respin it if the work needed to make it mergeable is minimal.
Otherwise, I don't think I have the time to revisit it (but feel free to
pick up the patches if you'd like).

[1] https://lore.kernel.org/lkml/20230720070825.992023-1-yosryahmed@google.com/
On 10/20/2023 11:45 AM, Yosry Ahmed wrote:
> [...]
> If reworking the mlock_count is beneficial for other reasons, I am
> happy to respin it if the work needed to make it mergeable is minimal.
> Otherwise, I don't think I have the time to revisit it (but feel free to
> pick up the patches if you'd like).

I believe reworking the mlock_count is the focus here. If there is no overlap
between lru and mlock_count, the whole logic of lru_add_drain() can be
removed here.

And I noticed the link:
https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
only has the cover letter; the patches aren't grouped.


Regards
Yin, Fengwei
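For context on that overlap: since commit 07ca76067308 ("mm/munlock: maintain page->mlock_count while unevictable"), mlock_count shares storage with the folio's LRU list linkage, roughly as below. This is a simplified excerpt for illustration, not the full mm_types.h definition:

	struct folio {
		/* ... */
		union {
			struct list_head lru;	/* usable while the folio is on an LRU list */
			struct {
				void *__filler;			/* keeps list_empty(&folio->lru) true */
				unsigned int mlock_count;	/* valid while on the unevictable list */
			};
		};
		/* ... */
	};

A freshly migrated mlocked folio that is still sitting in a per-CPU folio batch is not yet on the unevictable list, so it has nowhere to keep its mlock_count; that is exactly what the lru_add_drain() discussed in this thread works around, and why decoupling the two fields would let it be removed.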
On Thu, Oct 19, 2023 at 8:52 PM Yin, Fengwei <fengwei.yin@intel.com> wrote:
> [...]
> I believe reworking the mlock_count is the focus here. If there is no overlap
> between lru and mlock_count, the whole logic of lru_add_drain() can be
> removed here.

All patches except patch 4 are for reworking the mlock_count. Once the
mlock_count is reworked, reviving the unevictable LRU is actually very
simple and removes more code than it adds (see patch 4 below).

> And I noticed the link:
> https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
> only has the cover letter; the patches aren't grouped.

That's weird; here are the patches (in order):
https://lore.kernel.org/lkml/20230618065744.1363948-1-yosryahmed@google.com/
https://lore.kernel.org/lkml/20230618065756.1364399-1-yosryahmed@google.com/
https://lore.kernel.org/lkml/20230618065809.1364900-1-yosryahmed@google.com/
https://lore.kernel.org/lkml/20230618065816.1365301-1-yosryahmed@google.com/
https://lore.kernel.org/lkml/20230618065824.1365750-1-yosryahmed@google.com/
On 10/20/2023 12:02 PM, Yosry Ahmed wrote:
> [...]
> That's weird; here are the patches (in order):
> https://lore.kernel.org/lkml/20230618065744.1363948-1-yosryahmed@google.com/
> https://lore.kernel.org/lkml/20230618065756.1364399-1-yosryahmed@google.com/
> https://lore.kernel.org/lkml/20230618065809.1364900-1-yosryahmed@google.com/
> https://lore.kernel.org/lkml/20230618065816.1365301-1-yosryahmed@google.com/
> https://lore.kernel.org/lkml/20230618065824.1365750-1-yosryahmed@google.com/

Thanks a lot.


Regards
Yin, Fengwei
diff --git a/mm/migrate.c b/mm/migrate.c
index 4caf405b6504..32c96f89710f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1027,22 +1027,32 @@ union migration_ptr {
 	struct anon_vma *anon_vma;
 	struct address_space *mapping;
 };
+
+enum {
+	PAGE_WAS_MAPPED = 1 << 0,
+	PAGE_WAS_MLOCKED = 1 << 1,
+};
+
 static void __migrate_folio_record(struct folio *dst,
-				   unsigned long page_was_mapped,
+				   unsigned long page_flags,
 				   struct anon_vma *anon_vma)
 {
 	union migration_ptr ptr = { .anon_vma = anon_vma };
 	dst->mapping = ptr.mapping;
-	dst->private = (void *)page_was_mapped;
+	dst->private = (void *)page_flags;
 }
 
 static void __migrate_folio_extract(struct folio *dst,
 				   int *page_was_mappedp,
+				   int *page_was_mlocked,
 				   struct anon_vma **anon_vmap)
 {
 	union migration_ptr ptr = { .mapping = dst->mapping };
+	unsigned long page_flags = (unsigned long)dst->private;
+
 	*anon_vmap = ptr.anon_vma;
-	*page_was_mappedp = (unsigned long)dst->private;
+	*page_was_mappedp = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
+	*page_was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
 	dst->mapping = NULL;
 	dst->private = NULL;
 }
@@ -1103,7 +1113,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	bool locked = false;
@@ -1157,6 +1167,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 			folio_lock(src);
 		}
 		locked = true;
+		page_was_mlocked = folio_test_mlocked(src);
 
 		if (folio_test_writeback(src)) {
 			/*
@@ -1206,7 +1217,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	dst_locked = true;
 
 	if (unlikely(!is_lru)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, 0, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1236,7 +1247,13 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	}
 
 	if (!folio_mapped(src)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		unsigned int page_flags = 0;
+
+		if (page_was_mapped)
+			page_flags |= PAGE_WAS_MAPPED;
+		if (page_was_mlocked)
+			page_flags |= PAGE_WAS_MLOCKED;
+		__migrate_folio_record(dst, page_flags, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1261,12 +1278,13 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct list_head *ret)
 {
 	int rc;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
-	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+	__migrate_folio_extract(dst, &page_was_mapped,
+			&page_was_mlocked, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
@@ -1287,7 +1305,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
 	folio_add_lru(dst);
-	if (page_was_mapped)
+	if (page_was_mlocked)
 		lru_add_drain();
 
 	if (page_was_mapped)
@@ -1321,8 +1339,15 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * right list unless we want to retry.
 	 */
 	if (rc == -EAGAIN) {
+		unsigned int page_flags = 0;
+
+		if (page_was_mapped)
+			page_flags |= PAGE_WAS_MAPPED;
+		if (page_was_mlocked)
+			page_flags |= PAGE_WAS_MLOCKED;
+
 		list_add(&dst->lru, prev);
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_flags, anon_vma);
 		return rc;
 	}
 
@@ -1799,10 +1824,11 @@ static int migrate_pages_batch(struct list_head *from,
 	dst = list_first_entry(&dst_folios, struct folio, lru);
 	dst2 = list_next_entry(dst, lru);
 	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int page_was_mapped = 0;
+		int page_was_mapped = 0, page_was_mlocked = 0;
 		struct anon_vma *anon_vma = NULL;
 
-		__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+		__migrate_folio_extract(dst, &page_was_mapped,
+				&page_was_mlocked, &anon_vma);
 		migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
 				       true, ret_folios);
 		list_del(&dst->lru);
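As a side note on the mechanism above: __migrate_folio_record() and __migrate_folio_extract() simply pack the two booleans into the pointer-sized dst->private slot, which is otherwise unused while the folio is parked between the unmap and move phases. A minimal stand-alone sketch of that packing pattern follows; the struct below is a hypothetical stand-in for the folio, not kernel code:

	#include <assert.h>
	#include <stdio.h>

	enum {
		PAGE_WAS_MAPPED  = 1 << 0,
		PAGE_WAS_MLOCKED = 1 << 1,
	};

	/* Hypothetical stand-in for the pointer-sized folio->private field. */
	struct fake_folio {
		void *private;
	};

	/* Stash the flag word in the unused pointer slot, as the patch does. */
	static void record_flags(struct fake_folio *dst, unsigned long page_flags)
	{
		dst->private = (void *)page_flags;
	}

	/* Decode the flag word back into the two booleans and clear the slot. */
	static void extract_flags(struct fake_folio *dst, int *was_mapped, int *was_mlocked)
	{
		unsigned long page_flags = (unsigned long)dst->private;

		*was_mapped = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
		*was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
		dst->private = NULL;
	}

	int main(void)
	{
		struct fake_folio dst = { 0 };
		int mapped, mlocked;

		record_flags(&dst, PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED);
		extract_flags(&dst, &mapped, &mlocked);
		assert(mapped == 1 && mlocked == 1);
		printf("was_mapped=%d was_mlocked=%d\n", mapped, mlocked);
		return 0;
	}

Using one bit per property also leaves room to record more per-folio migration state later without changing __migrate_folio_record()'s signature again.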