Message ID | 20240304103757.235352-1-21cnbao@gmail.com |
---|---|
State | New |
Headers |
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: david@redhat.com, ryan.roberts@arm.com, chrisl@kernel.org, yuzhao@google.com, hanchuanhua@oppo.com, linux-kernel@vger.kernel.org, willy@infradead.org, ying.huang@intel.com, xiang@kernel.org, mhocko@suse.com, shy828301@gmail.com, wangkefeng.wang@huawei.com, Barry Song <v-songbaohua@oppo.com>, Hugh Dickins <hughd@google.com>
Subject: [RFC PATCH] mm: hold PTL from the first PTE while reclaiming a large folio
Date: Mon, 4 Mar 2024 23:37:57 +1300
Message-Id: <20240304103757.235352-1-21cnbao@gmail.com>
X-Mailer: git-send-email 2.34.1
List-Id: <linux-kernel.vger.kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Series | [RFC] mm: hold PTL from the first PTE while reclaiming a large folio
Commit Message
Barry Song
March 4, 2024, 10:37 a.m. UTC
From: Barry Song <v-songbaohua@oppo.com>

page_vma_mapped_walk() within try_to_unmap_one() races with other
PTE modifications such as break-before-make: while iterating the PTEs
of a large folio, it only begins to acquire the PTL after it finds a
valid (present) PTE. break-before-make intermediately sets PTEs to
pte_none, so a large folio's PTEs might be partially skipped in
try_to_unmap_one().

For example, for an anon folio, after try_to_unmap_one() we may have
PTE0 present while PTE1 ~ PTE(nr_pages - 1) are swap entries. The
folio is thus still mapped and fails to be reclaimed. What's even more
worrying is that its PTEs are no longer in a unified state, which
might lead to an accidental folio_split() afterwards. And since part
of the PTEs are now swap entries, accessing them will incur a page
fault - do_swap_page(). This creates both anxiety and more expense.
While we can't stop userspace's unmap from breaking up unified PTEs
such as CONT-PTE for a large folio, we can at least keep the kernel
itself from breaking them up by its own code design.

This patch holds the PTL from PTE0, so the folio will either be
entirely reclaimed or entirely kept. On the other hand, this approach
doesn't increase PTL contention: even without the patch,
page_vma_mapped_walk() always gets the PTL after it sometimes skips
one or two PTEs, because the intermediate break-before-makes are
short, according to tests. Of course, even without this patch, the
vast majority of try_to_unmap_one() calls already get the PTL from
PTE0; this patch makes that number 100%.

The other option is to give up in try_to_unmap_one() once we find that
PTE0 is not the first entry for which we get the PTL, calling
page_vma_mapped_walk_done() to end the iteration in that case. This
keeps the PTEs unified while the folio isn't reclaimed. The result is
quite similar to small folios with one PTE - either entirely reclaimed
or entirely kept. Reclaiming large folios by holding the PTL from PTE0
seems the better option compared to giving up after detecting that the
PTL was first taken at a non-PTE0 entry.

Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 mm/vmscan.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
Comments
Hi Barry,

On 04/03/2024 10:37, Barry Song wrote:
> From: Barry Song <v-songbaohua@oppo.com>
>
> page_vma_mapped_walk() within try_to_unmap_one() races with other
> PTEs modification such as break-before-make, while iterating PTEs
> of a large folio, it will only begin to acquire PTL after it gets
> a valid(present) PTE. break-before-make intermediately sets PTEs
> to pte_none. Thus, a large folio's PTEs might be partially skipped
> in try_to_unmap_one().

I just want to check my understanding here - I think the problem occurs
for PTE-mapped, PMD-sized folios as well as smaller-than-PMD-size large
folios? Now that I've had a look at the code and have a better
understanding, I think that must be the case? And therefore this problem
exists independently of my work to support swap-out of mTHP? (From your
previous report I was under the impression that it only affected mTHP.)

It's just that the problem is becoming more pronounced because with
mTHP, PTE-mapped large folios are much more common?

> For example, for an anon folio, after try_to_unmap_one(), we may
> have PTE0 present, while PTE1 ~ PTE(nr_pages - 1) are swap entries.
> So folio will be still mapped, the folio fails to be reclaimed.
> What's even more worrying is, its PTEs are no longer in a unified
> state. This might lead to accident folio_split() afterwards. And
> since a part of PTEs are now swap entries, accessing them will
> incur page fault - do_swap_page.
> It creates both anxiety and more expense. While we can't avoid
> userspace's unmap to break up unified PTEs such as CONT-PTE for
> a large folio, we can indeed keep away from kernel's breaking up
> them due to its code design.
> This patch is holding PTL from PTE0, thus, the folio will either
> be entirely reclaimed or entirely kept. On the other hand, this
> approach doesn't increase PTL contention. Even w/o the patch,
> page_vma_mapped_walk() will always get PTL after it sometimes
> skips one or two PTEs because intermediate break-before-makes
> are short, according to test. Of course, even w/o this patch,
> the vast majority of try_to_unmap_one still can get PTL from
> PTE0. This patch makes the number 100%.
> The other option is that we can give up in try_to_unmap_one
> once we find PTE0 is not the first entry we get PTL, we call
> page_vma_mapped_walk_done() to end the iteration at this case.
> This will keep the unified PTEs while the folio isn't reclaimed.
> The result is quite similar with small folios with one PTE -
> either entirely reclaimed or entirely kept.
> Reclaiming large folios by holding PTL from PTE0 seems a better
> option comparing to giving up after detecting PTL begins from
> non-PTE0.
>
> Cc: Hugh Dickins <hughd@google.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>

Do we need a Fixes tag?

> ---
>  mm/vmscan.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 0b888a2afa58..e4722fbbcd0c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1270,6 +1270,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>
>  		if (folio_test_pmd_mappable(folio))
>  			flags |= TTU_SPLIT_HUGE_PMD;
> +		/*
> +		 * if page table lock is not held from the first PTE of
> +		 * a large folio, some PTEs might be skipped because of
> +		 * races with break-before-make, for example, PTEs can
> +		 * be pte_none intermediately, thus one or more PTEs
> +		 * might be skipped in try_to_unmap_one, we might result
> +		 * in a large folio is partially mapped and partially
> +		 * unmapped after try_to_unmap
> +		 */
> +		if (folio_test_large(folio))
> +			flags |= TTU_SYNC;

This looks sensible to me after thinking about it for a while. But I
also have a gut feeling that there might be some more subtleties that
are going over my head, since I'm not expert in this area. So will
leave others to provide R-b :)

Thanks,
Ryan

>
>  		try_to_unmap(folio, flags);
>  		if (folio_mapped(folio)) {
On 04.03.24 13:20, Ryan Roberts wrote:
> Hi Barry,
>
> On 04/03/2024 10:37, Barry Song wrote:
>> From: Barry Song <v-songbaohua@oppo.com>
>>
>> page_vma_mapped_walk() within try_to_unmap_one() races with other
>> PTEs modification such as break-before-make, while iterating PTEs
>> of a large folio, it will only begin to acquire PTL after it gets
>> a valid(present) PTE. break-before-make intermediately sets PTEs
>> to pte_none. Thus, a large folio's PTEs might be partially skipped
>> in try_to_unmap_one().
>
> I just want to check my understanding here - I think the problem
> occurs for PTE-mapped, PMD-sized folios as well as
> smaller-than-PMD-size large folios? Now that I've had a look at the
> code and have a better understanding, I think that must be the case?
> And therefore this problem exists independently of my work to support
> swap-out of mTHP? (From your previous report I was under the
> impression that it only affected mTHP.)
>
> It's just that the problem is becoming more pronounced because with
> mTHP, PTE-mapped large folios are much more common?

That is my understanding.

>> For example, for an anon folio, after try_to_unmap_one(), we may
>> have PTE0 present, while PTE1 ~ PTE(nr_pages - 1) are swap entries.
>> [...]
>> Reclaiming large folios by holding PTL from PTE0 seems a better
>> option comparing to giving up after detecting PTL begins from
>> non-PTE0.

I'm sure that wall of text can be formatted in a better way :) . Also,
I think we can drop some of the details.

If you need some inspiration, I can give it a shot.

>> Cc: Hugh Dickins <hughd@google.com>
>> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
>
> Do we need a Fixes tag?

What would be the description of the problem we are fixing?

1) failing to unmap?

That can happen with small folios as well IIUC.

2) Putting the large folio on the deferred split queue?

That sounds more reasonable.

>> [...]
>> +		if (folio_test_large(folio))
>> +			flags |= TTU_SYNC;
>
> This looks sensible to me after thinking about it for a while. But I
> also have a gut feeling that there might be some more subtleties that
> are going over my head, since I'm not expert in this area. So will
> leave others to provide R-b :)

As we are seeing more such problems with lockless PT walks, maybe we
really want some other special value (nonswap entry?) to indicate that
a PTE is currently undergoing protection changes. So we'd avoid the
pte_none() temporarily, if possible.

Without that, TTU_SYNC feels like the right thing to do.
On 04/03/2024 12:41, David Hildenbrand wrote:
> On 04.03.24 13:20, Ryan Roberts wrote:
>> Hi Barry,
>>
>> On 04/03/2024 10:37, Barry Song wrote:
>>> From: Barry Song <v-songbaohua@oppo.com>
>>> [...]
>>
>> I just want to check my understanding here - I think the problem
>> occurs for PTE-mapped, PMD-sized folios as well as
>> smaller-than-PMD-size large folios? Now that I've had a look at the
>> code and have a better understanding, I think that must be the case?
>> And therefore this problem exists independently of my work to support
>> swap-out of mTHP? (From your previous report I was under the
>> impression that it only affected mTHP.)
>>
>> It's just that the problem is becoming more pronounced because with
>> mTHP, PTE-mapped large folios are much more common?
>
> That is my understanding.
>
>>> [...]
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
>>
>> Do we need a Fixes tag?
>
> What would be the description of the problem we are fixing?
>
> 1) failing to unmap?
>
> That can happen with small folios as well IIUC.
>
> 2) Putting the large folio on the deferred split queue?
>
> That sounds more reasonable.

Isn't the real problem today that we can end up writing a THP to the
swap file (so 2M more IO and space used) but we can't remove it from
memory, so no actual reclaim happens? Although I guess your (2) is
really just another way of saying that.

>>> [...]
>>> +		if (folio_test_large(folio))
>>> +			flags |= TTU_SYNC;
>>
>> This looks sensible to me after thinking about it for a while. But I
>> also have a gut feeling that there might be some more subtleties that
>> are going over my head, since I'm not expert in this area. So will
>> leave others to provide R-b :)
>
> As we are seeing more such problems with lockless PT walks, maybe we
> really want some other special value (nonswap entry?) to indicate that
> a PTE is currently undergoing protection changes. So we'd avoid the
> pte_none() temporarily, if possible.
>
> Without that, TTU_SYNC feels like the right thing to do.
On 04.03.24 14:03, Ryan Roberts wrote:
> On 04/03/2024 12:41, David Hildenbrand wrote:
>> On 04.03.24 13:20, Ryan Roberts wrote:
>>> [...]
>>
>> What would be the description of the problem we are fixing?
>>
>> 1) failing to unmap?
>>
>> That can happen with small folios as well IIUC.
>>
>> 2) Putting the large folio on the deferred split queue?
>>
>> That sounds more reasonable.
>
> Isn't the real problem today that we can end up writing a THP to the
> swap file (so 2M more IO and space used) but we can't remove it from
> memory, so no actual reclaim happens? Although I guess your (2) is
> really just another way of saying that.

The same could happen with small folios, I believe? We might end up
running into the folio_mapped() after the try_to_unmap().

Note that the actual I/O does not happen during add_to_swap(), but
during the pageout() call, when we find the folio to be dirty. So there
would not actually be more I/O. Only swap space would be reserved,
which would be used later when not running into the race.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0b888a2afa58..e4722fbbcd0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1270,6 +1270,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 		if (folio_test_pmd_mappable(folio))
 			flags |= TTU_SPLIT_HUGE_PMD;
+		/*
+		 * if page table lock is not held from the first PTE of
+		 * a large folio, some PTEs might be skipped because of
+		 * races with break-before-make, for example, PTEs can
+		 * be pte_none intermediately, thus one or more PTEs
+		 * might be skipped in try_to_unmap_one, we might result
+		 * in a large folio is partially mapped and partially
+		 * unmapped after try_to_unmap
+		 */
+		if (folio_test_large(folio))
+			flags |= TTU_SYNC;
 
 		try_to_unmap(folio, flags);
 		if (folio_mapped(folio)) {