Message ID | 20230728171037.2219226-5-shikemeng@huaweicloud.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> From: Kemeng Shi <shikemeng@huaweicloud.com> To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, baolin.wang@linux.alibaba.com, mgorman@techsingularity.net, willy@infradead.org, david@redhat.com Cc: shikemeng@huaweicloud.com Subject: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages Date: Sat, 29 Jul 2023 01:10:33 +0800 Message-Id: <20230728171037.2219226-5-shikemeng@huaweicloud.com> In-Reply-To: <20230728171037.2219226-1-shikemeng@huaweicloud.com> References: <20230728171037.2219226-1-shikemeng@huaweicloud.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit |
Series | Fixes and cleanups to compaction |
Commit Message
Kemeng Shi
July 28, 2023, 5:10 p.m. UTC
Previously, fast_find_migrateblock() set the skip flag on the pageblock it
found, so fast_find_block was needed to keep the isolation_suitable() check
below from skipping that pageblock.
Since commit 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
fast_find_migrateblock""") removed the skip set in fast_find_migrateblock(),
fast_find_block is useless and can be removed.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
mm/compaction.c | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)
Comments
On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> In old code, we set skip to found page block in fast_find_migrateblock. So
> we use fast_find_block to avoid skip found page block from
> fast_find_migrateblock.
> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
> then fast_find_block is useless.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
> mm/compaction.c | 12 +-----------
> 1 file changed, 1 insertion(+), 11 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ad535f880c70..09c36251c613 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
> const isolate_mode_t isolate_mode =
> (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
> (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
> - bool fast_find_block;
>
> /*
> * Start at where we last stopped, or beginning of the zone as
> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
> if (block_start_pfn < cc->zone->zone_start_pfn)
> block_start_pfn = cc->zone->zone_start_pfn;
>
> - /*
> - * fast_find_migrateblock marks a pageblock skipped so to avoid
> - * the isolation_suitable check below, check whether the fast
> - * search was successful.
> - */
> - fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
> -
> /* Only scan within a pageblock boundary */
> block_end_pfn = pageblock_end_pfn(low_pfn);
>
> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
> * Do not cross the free scanner.
> */
> for (; block_end_pfn <= cc->free_pfn;
> - fast_find_block = false,
> cc->migrate_pfn = low_pfn = block_end_pfn,
> block_start_pfn = block_end_pfn,
> block_end_pfn += pageblock_nr_pages) {
> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
> * before making it "skip" so other compaction instances do
> * not scan the same block.
> */
> - if (pageblock_aligned(low_pfn) &&
> - !fast_find_block && !isolation_suitable(cc, page))
> + if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))

I do not think so. If the pageblock is found by fast_find_migrateblock(), it
definitely has not had the skip flag set, so there is no need to call
isolation_suitable() if fast_find_block is true, right?
on 8/1/2023 10:42 AM, Baolin Wang wrote:
>
>
> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>> In old code, we set skip to found page block in fast_find_migrateblock. So
>> we use fast_find_block to avoid skip found page block from
>> fast_find_migrateblock.
>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>> then fast_find_block is useless.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>> mm/compaction.c | 12 +-----------
>> 1 file changed, 1 insertion(+), 11 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index ad535f880c70..09c36251c613 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>> const isolate_mode_t isolate_mode =
>> (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>> (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>> - bool fast_find_block;
>>
>> /*
>> * Start at where we last stopped, or beginning of the zone as
>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>> if (block_start_pfn < cc->zone->zone_start_pfn)
>> block_start_pfn = cc->zone->zone_start_pfn;
>>
>> - /*
>> - * fast_find_migrateblock marks a pageblock skipped so to avoid
>> - * the isolation_suitable check below, check whether the fast
>> - * search was successful.
>> - */
>> - fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>> -
>> /* Only scan within a pageblock boundary */
>> block_end_pfn = pageblock_end_pfn(low_pfn);
>>
>> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>> * Do not cross the free scanner.
>> */
>> for (; block_end_pfn <= cc->free_pfn;
>> - fast_find_block = false,
>> cc->migrate_pfn = low_pfn = block_end_pfn,
>> block_start_pfn = block_end_pfn,
>> block_end_pfn += pageblock_nr_pages) {
>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>> * before making it "skip" so other compaction instances do
>> * not scan the same block.
>> */
>> - if (pageblock_aligned(low_pfn) &&
>> - !fast_find_block && !isolation_suitable(cc, page))
>> + if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>
> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>
>
Actually, the found pageblock could still be marked skip:
1. another compactor could mark this pageblock as skip after the zone lock is
released in fast_find_migrateblock.
2. fast_find_migrateblock may use a pfn from reinit_migrate_pfn which was
previously found and scanned. It could be fully scanned and marked skip after
its first return from fast_find_migrateblock, and then it should be skipped.
Thanks!
On 8/1/2023 11:24 AM, Kemeng Shi wrote:
>
>
> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>
>>
>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>> we use fast_find_block to avoid skip found page block from
>>> fast_find_migrateblock.
>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>> then fast_find_block is useless.
>>>
>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>> ---
>>> mm/compaction.c | 12 +-----------
>>> 1 file changed, 1 insertion(+), 11 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index ad535f880c70..09c36251c613 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>> const isolate_mode_t isolate_mode =
>>> (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>> (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>> - bool fast_find_block;
>>>
>>> /*
>>> * Start at where we last stopped, or beginning of the zone as
>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>> if (block_start_pfn < cc->zone->zone_start_pfn)
>>> block_start_pfn = cc->zone->zone_start_pfn;
>>>
>>> - /*
>>> - * fast_find_migrateblock marks a pageblock skipped so to avoid
>>> - * the isolation_suitable check below, check whether the fast
>>> - * search was successful.
>>> - */
>>> - fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>> -
>>> /* Only scan within a pageblock boundary */
>>> block_end_pfn = pageblock_end_pfn(low_pfn);
>>>
>>> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>> * Do not cross the free scanner.
>>> */
>>> for (; block_end_pfn <= cc->free_pfn;
>>> - fast_find_block = false,
>>> cc->migrate_pfn = low_pfn = block_end_pfn,
>>> block_start_pfn = block_end_pfn,
>>> block_end_pfn += pageblock_nr_pages) {
>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>> * before making it "skip" so other compaction instances do
>>> * not scan the same block.
>>> */
>>> - if (pageblock_aligned(low_pfn) &&
>>> - !fast_find_block && !isolation_suitable(cc, page))
>>> + if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>
>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>
>>
> Actually, found pageblock could be set skip as:
> 1. other compactor could mark this pageblock as skip after zone lock is realeased
> in fast_find_migrateblock.

Yes, but your patch also cannot close this race window: the skip flag can
still be set by other compactors after the isolation_suitable() validation.

> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
> and sacnned. It could be fully sacnned and marked skip after it's first return from

Right, but now 'fast_find_block' is false, and we will call
isolation_suitable() to validate the skip flag.

> fast_find_migrateblock and it should be skipped.
> Thanks!
on 8/1/2023 11:34 AM, Baolin Wang wrote:
>
>
> On 8/1/2023 11:24 AM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>>
>>>
>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>>> we use fast_find_block to avoid skip found page block from
>>>> fast_find_migrateblock.
>>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>>> then fast_find_block is useless.
>>>>
>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>> ---
>>>> mm/compaction.c | 12 +-----------
>>>> 1 file changed, 1 insertion(+), 11 deletions(-)
>>>>
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index ad535f880c70..09c36251c613 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>> const isolate_mode_t isolate_mode =
>>>> (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>>> (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>>> - bool fast_find_block;
>>>>
>>>> /*
>>>> * Start at where we last stopped, or beginning of the zone as
>>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>> if (block_start_pfn < cc->zone->zone_start_pfn)
>>>> block_start_pfn = cc->zone->zone_start_pfn;
>>>>
>>>> - /*
>>>> - * fast_find_migrateblock marks a pageblock skipped so to avoid
>>>> - * the isolation_suitable check below, check whether the fast
>>>> - * search was successful.
>>>> - */
>>>> - fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>>> -
>>>> /* Only scan within a pageblock boundary */
>>>> block_end_pfn = pageblock_end_pfn(low_pfn);
>>>>
>>>> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>> * Do not cross the free scanner.
>>>> */
>>>> for (; block_end_pfn <= cc->free_pfn;
>>>> - fast_find_block = false,
>>>> cc->migrate_pfn = low_pfn = block_end_pfn,
>>>> block_start_pfn = block_end_pfn,
>>>> block_end_pfn += pageblock_nr_pages) {
>>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>> * before making it "skip" so other compaction instances do
>>>> * not scan the same block.
>>>> */
>>>> - if (pageblock_aligned(low_pfn) &&
>>>> - !fast_find_block && !isolation_suitable(cc, page))
>>>> + if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>>
>>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>>
>>>
>> Actually, found pageblock could be set skip as:
>> 1. other compactor could mark this pageblock as skip after zone lock is realeased
>> in fast_find_migrateblock.
>
> Yes, but your patch also can not close this race window, that means it can also be set skip flag after the isolation_suitable() validation by other compactors.
>
Yes, but I think it is still worth it to remove all the fast_find_block
related checks and reduce code complexity, at the cost of one redundant
isolation_suitable() call which may skip some blocks with luck.

>> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
>> and sacnned. It could be fully sacnned and marked skip after it's first return from
>
> Right, but now the 'fast_find_block' is false, and we will call isolation_suitable() to validate the skip flag.
>
Right, sorry for missing that.

But it is ok to keep the fast_find_block if you insist, and I will just
correct the stale comment that "fast_find_migrateblock marks a pageblock
skipped ..." in the next version. Thanks!

>> fast_find_migrateblock and it should be skipped.
>> Thanks!
>
On 8/1/2023 11:48 AM, Kemeng Shi wrote:
>
>
> on 8/1/2023 11:34 AM, Baolin Wang wrote:
>>
>>
>> On 8/1/2023 11:24 AM, Kemeng Shi wrote:
>>>
>>>
>>> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>>>
>>>>
>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>>>> we use fast_find_block to avoid skip found page block from
>>>>> fast_find_migrateblock.
>>>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>>>> then fast_find_block is useless.
>>>>>
>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>> ---
>>>>> mm/compaction.c | 12 +-----------
>>>>> 1 file changed, 1 insertion(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>> index ad535f880c70..09c36251c613 100644
>>>>> --- a/mm/compaction.c
>>>>> +++ b/mm/compaction.c
>>>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>> const isolate_mode_t isolate_mode =
>>>>> (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>>>> (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>>>> - bool fast_find_block;
>>>>>
>>>>> /*
>>>>> * Start at where we last stopped, or beginning of the zone as
>>>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>> if (block_start_pfn < cc->zone->zone_start_pfn)
>>>>> block_start_pfn = cc->zone->zone_start_pfn;
>>>>>
>>>>> - /*
>>>>> - * fast_find_migrateblock marks a pageblock skipped so to avoid
>>>>> - * the isolation_suitable check below, check whether the fast
>>>>> - * search was successful.
>>>>> - */
>>>>> - fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>>>> -
>>>>> /* Only scan within a pageblock boundary */
>>>>> block_end_pfn = pageblock_end_pfn(low_pfn);
>>>>>
>>>>> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>> * Do not cross the free scanner.
>>>>> */
>>>>> for (; block_end_pfn <= cc->free_pfn;
>>>>> - fast_find_block = false,
>>>>> cc->migrate_pfn = low_pfn = block_end_pfn,
>>>>> block_start_pfn = block_end_pfn,
>>>>> block_end_pfn += pageblock_nr_pages) {
>>>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>> * before making it "skip" so other compaction instances do
>>>>> * not scan the same block.
>>>>> */
>>>>> - if (pageblock_aligned(low_pfn) &&
>>>>> - !fast_find_block && !isolation_suitable(cc, page))
>>>>> + if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>>>
>>>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>>>
>>>>
>>> Actually, found pageblock could be set skip as:
>>> 1. other compactor could mark this pageblock as skip after zone lock is realeased
>>> in fast_find_migrateblock.
>>
>> Yes, but your patch also can not close this race window, that means it can also be set skip flag after the isolation_suitable() validation by other compactors.
>>
> Yes, I think it's still worth to remove a lot of fast_find_block relevant check and reduce
> code complexity with one redundant isolation_suitable which may skip some block with luck.
>>> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
>>> and sacnned. It could be fully sacnned and marked skip after it's first return from
>>
>> Right, but now the 'fast_find_block' is false, and we will call isolation_suitable() to validate the skip flag.
>>
> Right, sorry for missing that.
>
> But it's ok to keep the fast_find_block if you insist and I will just correct the stale

Yes, I still prefer to keep the fast_find_block, since I do not see this
patch fixing any real issue, and it might have a side effect for
fast-find-pageblock(?).

> comment that "fast_find_migrateblock marks a pageblock skipped ..." in next version.

Sure, please do it.
diff --git a/mm/compaction.c b/mm/compaction.c
index ad535f880c70..09c36251c613 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	const isolate_mode_t isolate_mode =
 		(sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
 		(cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
-	bool fast_find_block;
 
 	/*
 	 * Start at where we last stopped, or beginning of the zone as
@@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	if (block_start_pfn < cc->zone->zone_start_pfn)
 		block_start_pfn = cc->zone->zone_start_pfn;
 
-	/*
-	 * fast_find_migrateblock marks a pageblock skipped so to avoid
-	 * the isolation_suitable check below, check whether the fast
-	 * search was successful.
-	 */
-	fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
-
 	/* Only scan within a pageblock boundary */
 	block_end_pfn = pageblock_end_pfn(low_pfn);
 
@@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	 * Do not cross the free scanner.
 	 */
 	for (; block_end_pfn <= cc->free_pfn;
-			fast_find_block = false,
 			cc->migrate_pfn = low_pfn = block_end_pfn,
 			block_start_pfn = block_end_pfn,
 			block_end_pfn += pageblock_nr_pages) {
@@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 		 * before making it "skip" so other compaction instances do
 		 * not scan the same block.
 		 */
-		if (pageblock_aligned(low_pfn) &&
-		    !fast_find_block && !isolation_suitable(cc, page))
+		if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
 			continue;
 
 		/*