Message ID | 20230428135414.v3.1.Ia86ccac02a303154a0b8bc60567e7a95d34c96d3@changeid |
---|---|
State | New |
Headers | From: Douglas Anderson <dianders@chromium.org>; Date: Fri, 28 Apr 2023 13:54:38 -0700; Subject: [PATCH v3] migrate_pages: Avoid blocking for IO in MIGRATE_SYNC_LIGHT |
Series | [v3] migrate_pages: Avoid blocking for IO in MIGRATE_SYNC_LIGHT |
Commit Message
Doug Anderson
April 28, 2023, 8:54 p.m. UTC
The MIGRATE_SYNC_LIGHT mode is intended to block for things that will
finish quickly but not for things that will take a long time. Exactly
how long is too long is not well defined, but waits of tens of
milliseconds are likely non-ideal.
When putting a Chromebook under memory pressure (opening over 90 tabs
on a 4GB machine) it was fairly easy to see delays waiting for some
locks in the kcompactd code path of > 100 ms. While the laptop wasn't
amazingly usable in this state, it was still limping along and this
state isn't something artificial. Sometimes we simply end up with a
lot of memory pressure.
Putting the same Chromebook under memory pressure while it was running
Android apps (though not stressing them) showed a much worse result
(NOTE: this was on an older kernel, but the codepaths here are similar).
Android apps on ChromeOS currently run from a 128K-block,
zlib-compressed, loopback-mounted squashfs disk. If we get a page
fault from something backed by the squashfs filesystem we could end up
holding a folio lock while reading enough from disk to decompress 128K
(and then decompressing it using the somewhat slow zlib algorithms).
That reading goes through the ext4 subsystem (because it's a loopback
mount) before eventually ending up in the block subsystem. This extra
jaunt adds extra overhead. Without much work I could see cases where
we ended up blocked on a folio lock for over a second. With more
extreme memory pressure I could see up to 25 seconds.
We considered adding a timeout in the case of MIGRATE_SYNC_LIGHT for
the two locks that were seen to be slow [1] and that generated much
discussion. After discussion, it was decided that we should avoid
waiting for the two locks during MIGRATE_SYNC_LIGHT if they were being
held for IO. We'll continue with the unbounded wait for the more full
SYNC modes.
With this change, I couldn't see any slow waits on these locks with my
previous testcases.
NOTE: The reason I started digging into this originally isn't because
some benchmark had gone awry, but because we've received in-the-field
crash reports where we have a hung task waiting on the page lock
(which is the equivalent code path on old kernels). While the root
cause of those crashes is likely unrelated and won't be fixed by this
patch, analyzing those crash reports did point out that these very long
waits seemed like something worth fixing. With this patch we should no
longer hang waiting on these locks, but presumably the system will
still be in bad shape and hang somewhere else.
[1] https://lore.kernel.org/r/20230421151135.v2.1.I2b71e11264c5c214bc59744b9e13e4c353bc5714@changeid
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Gao Xiang <hsiangkao@linux.alibaba.com>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
---
Most of the actual code in this patch came from emails written by
Matthew Wilcox and I just cleaned the code up to get it to compile.
I'm happy to set authorship to him if he would like, but for now I've
credited him with Suggested-by.
This patch has changed pretty significantly between versions, so
adding a link to previous versions to help anyone needing to find the
history:
v1 - https://lore.kernel.org/r/20230413182313.RFC.1.Ia86ccac02a303154a0b8bc60567e7a95d34c96d3@changeid
v2 - https://lore.kernel.org/r/20230421221249.1616168-1-dianders@chromium.org/
Changes in v3:
- Combine patches for buffers and folios.
- Use buffer_uptodate() and folio_test_uptodate() instead of timeout.
Changes in v2:
- Keep unbounded delay in "SYNC", delay with a timeout in "SYNC_LIGHT".
- Also add a timeout for locking of buffers.
mm/migrate.c | 49 ++++++++++++++++++++++++++-----------------------
1 file changed, 26 insertions(+), 23 deletions(-)
Comments
On Fri, Apr 28, 2023 at 01:54:38PM -0700, Douglas Anderson wrote:
> Most of the actual code in this patch came from emails written by
> Matthew Wilcox and I just cleaned the code up to get it to compile.
> I'm happy to set authorship to him if he would like, but for now I've
> credited him with Suggested-by.

This all looks good to me. I don't care about getting credit for it.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
On Fri, Apr 28, 2023 at 01:54:38PM -0700, Douglas Anderson wrote:
> The MIGRATE_SYNC_LIGHT mode is intended to block for things that will
> finish quickly but not for things that will take a long time. Exactly
> how long is too long is not well defined, but waits of tens of
> milliseconds is likely non-ideal.
>
> [ ... ]

Acked-by: Mel Gorman <mgorman@techsingularity.net>
Hi,

On Sat, Apr 29, 2023 at 3:14 AM Hillf Danton <hdanton@sina.com> wrote:
> On 28 Apr 2023 13:54:38 -0700 Douglas Anderson <dianders@chromium.org>
> > When putting a Chromebook under memory pressure (opening over 90 tabs
> > on a 4GB machine) it was fairly easy to see delays waiting for some
> > locks in the kcompactd code path of > 100 ms.
>
> Was kcompactd waken up for PAGE_ALLOC_COSTLY_ORDER?

I put some more traces in and reproduced it again. I saw something that
looked like this:

1. balance_pgdat() called wakeup_kcompactd() with order=10 and that
   caused us to get all the way to the end and wake up kcompactd (there
   were previous calls to wakeup_kcompactd() that returned early).

2. kcompactd started and completed kcompactd_do_work() without blocking.

3. kcompactd called proactive_compact_node() and there blocked for ~92ms
   in one case, ~120ms in another case, ~131ms in another case.

> > Without much work I could see cases where we ended up blocked on a
> > folio lock for over a second. With more extreme memory pressure I
> > could see up to 25 seconds.
>
> In the same kcompactd code path above?

It was definitely in kcompactd. I can go back and trace through this
too, if it's useful, but I suspect it's the same.

> > With this change, I couldn't see any slow waits on these locks with
> > my previous testcases.
>
> Well this is the upside after this change, but given the win, what is
> the lose/cost paid? For example the changes in compact fail and success [1].
>
> [1] https://lore.kernel.org/lkml/20230418191313.268131-1-hannes@cmpxchg.org/

That looks like an interesting series. Obviously it would need to be
tested, but my hunch is that ${SUBJECT} patch would work well with that
series. Specifically, with Johannes's series it seems more important for
the kcompactd thread to be working fruitfully. Having it blocked for a
long time when there is other useful work it could be doing still seems
wrong. With ${SUBJECT} patch it's not that we'll never come back and try
again; we'll just wait until a future iteration when (hopefully) the
locks are easier to acquire. In the meantime, we're looking for other
pages to migrate.

-Doug
Hi,

On Sun, Apr 30, 2023 at 1:53 AM Hillf Danton <hdanton@sina.com> wrote:
> On 28 Apr 2023 13:54:38 -0700 Douglas Anderson <dianders@chromium.org>
> > When putting a Chromebook under memory pressure (opening over 90 tabs
> > on a 4GB machine) it was fairly easy to see delays waiting for some
> > locks in the kcompactd code path of > 100 ms.
>
> Given longer than 100ms stall, this can not be a correct fix if the
> hardware fails to do more than ten IOs a second.
>
> OTOH given some pages reclaimed for compaction to make forward progress
> before kswapd wakes kcompactd up, this can not be a fix without spotting
> the cause of the stall.

Right, the system is in pretty bad shape when this happens, and it's not
very effective at doing IO or much of anything because it's under bad
memory pressure.

I guess my first thought is that, when this happens, a process holding
the lock gets preempted and doesn't get scheduled back in for a while.
That _should_ be possible, right? In the case where I'm reproducing this,
all the CPUs would be super busy madly trying to compress / decompress
zram, so it doesn't surprise me that a process could get context switched
out for a while.

-Doug
Hi,

On Tue, May 2, 2023 at 6:45 PM Hillf Danton <hdanton@sina.com> wrote:
> On 2 May 2023 14:20:54 -0700 Douglas Anderson <dianders@chromium.org>
> > Right that the system is in pretty bad shape when this happens and
> > it's not very effective at doing IO or much of anything because it's
> > under bad memory pressure.
>
> Based on the info in another reply [1]
>
> | I put some more traces in and reproduced it again. I saw something
> | that looked like this:
> |
> | 1. balance_pgdat() called wakeup_kcompactd() with order=10 and that
> |    caused us to get all the way to the end and wakeup kcompactd (there
> |    were previous calls to wakeup_kcompactd() that returned early).
> |
> | 2. kcompactd started and completed kcompactd_do_work() without blocking.
> |
> | 3. kcompactd called proactive_compact_node() and there blocked for
> |    ~92ms in one case, ~120ms in another case, ~131ms in another case.
>
> I see fragmentation given order=10 and proactive_compact_node(). Can you
> specify the evidence of bad memory pressure?

What type of evidence are you looking for? When I'm reproducing these
problems, I'm running a test that specifically puts the system under
memory pressure by opening up lots of tabs in the Chrome browser. When I
start seeing these printouts, I can take a look at the system and I can
see that it's pretty much constantly swapping in and swapping out.

> > I guess my first thought is that, when this happens, a process
> > holding the lock gets preempted and doesn't get scheduled back in
> > for a while. That _should_ be possible, right? In the case where I'm
> > reproducing this then all the CPUs would be super busy madly trying
> > to compress / decompress zram, so it doesn't surprise me that a
> > process could get context switched out for a while.
>
> Could switchout turn the below I/O upside down?
> 	/*
> 	 * In "light" mode, we can wait for transient locks (eg
> 	 * inserting a page into the page table), but it's not
> 	 * worth waiting for I/O.
> 	 */

I'm not sure I understand what you're asking, sorry!

-Doug
diff --git a/mm/migrate.c b/mm/migrate.c
index db3f154446af..4a384eb32917 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -698,37 +698,32 @@ static bool buffer_migrate_lock_buffers(struct buffer_head *head,
 			enum migrate_mode mode)
 {
 	struct buffer_head *bh = head;
+	struct buffer_head *failed_bh;
 
-	/* Simple case, sync compaction */
-	if (mode != MIGRATE_ASYNC) {
-		do {
-			lock_buffer(bh);
-			bh = bh->b_this_page;
-
-		} while (bh != head);
-
-		return true;
-	}
-
-	/* async case, we cannot block on lock_buffer so use trylock_buffer */
 	do {
 		if (!trylock_buffer(bh)) {
-			/*
-			 * We failed to lock the buffer and cannot stall in
-			 * async migration. Release the taken locks
-			 */
-			struct buffer_head *failed_bh = bh;
-			bh = head;
-			while (bh != failed_bh) {
-				unlock_buffer(bh);
-				bh = bh->b_this_page;
-			}
-			return false;
+			if (mode == MIGRATE_ASYNC)
+				goto unlock;
+			if (mode == MIGRATE_SYNC_LIGHT && !buffer_uptodate(bh))
+				goto unlock;
+			lock_buffer(bh);
 		}
 		bh = bh->b_this_page;
 	} while (bh != head);
+
 	return true;
+
+unlock:
+	/* We failed to lock the buffer and cannot stall. */
+	failed_bh = bh;
+	bh = head;
+	while (bh != failed_bh) {
+		unlock_buffer(bh);
+		bh = bh->b_this_page;
+	}
+
+	return false;
 }
 
 static int __buffer_migrate_folio(struct address_space *mapping,
@@ -1162,6 +1157,14 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 		if (current->flags & PF_MEMALLOC)
 			goto out;
 
+		/*
+		 * In "light" mode, we can wait for transient locks (eg
+		 * inserting a page into the page table), but it's not
+		 * worth waiting for I/O.
+		 */
+		if (mode == MIGRATE_SYNC_LIGHT && !folio_test_uptodate(src))
+			goto out;
+
 		folio_lock(src);
 	}
 	locked = true;