Message ID | 20240204083411.3762683-1-chengming.zhou@linux.dev |
---|---|
State | New |
Headers |
From: chengming.zhou@linux.dev
To: hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH] mm/zswap: invalidate old entry when store fail or !zswap_enabled
Date: Sun, 4 Feb 2024 08:34:11 +0000
Message-Id: <20240204083411.3762683-1-chengming.zhou@linux.dev>
List-Id: <linux-kernel.vger.kernel.org> |
Series |
mm/zswap: invalidate old entry when store fail or !zswap_enabled
|
Commit Message
Chengming Zhou
Feb. 4, 2024, 8:34 a.m. UTC
From: Chengming Zhou <zhouchengming@bytedance.com>

We may encounter duplicate entry in the zswap_store():

1. swap slot that freed to per-cpu swap cache, doesn't invalidate
   the zswap entry, then got reused. This has been fixed.

2. !exclusive load mode, swapin folio will leave its zswap entry
   on the tree, then swapout again. This has been removed.

3. one folio can be dirtied again after zswap_store(), so need to
   zswap_store() again. This should be handled correctly.

So we must invalidate the old duplicate entry before insert the
new one, which actually doesn't have to be done at the beginning
of zswap_store(). And this is a normal situation, we shouldn't
WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
to detect swap entry UAF problem? But not very necessary here.)

The good point is that we don't need to lock tree twice in the
store success path.

Note we still need to invalidate the old duplicate entry in the
store failure path, otherwise the new data in swapfile could be
overwrite by the old data in zswap pool when lru writeback.

We have to do this even when !zswap_enabled since zswap can be
disabled anytime. If the folio store success before, then got
dirtied again but zswap disabled, we won't invalidate the old
duplicate entry in the zswap_store(). So later lru writeback
may overwrite the new data in swapfile.

This fix is not good, since we have to grab lock to check everytime
even when zswap is disabled, but it's simple.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/zswap.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)
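In condensed form, the control flow after this patch looks roughly as follows. This is a simplified sketch of the diff shown at the bottom of this page, not a complete listing: compression, objcg charging, the LRU handling and the shrink path are elided.

bool zswap_store(struct folio *folio)
{
	...
	if (!zswap_enabled)
		goto check_old;

	/* ... compress the folio and allocate the new zswap entry ... */

	spin_lock(&tree->lock);
	/* The folio may have been stored before and dirtied again: drop the old copy. */
	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST)
		zswap_invalidate_entry(tree, dupentry);
	spin_unlock(&tree->lock);
	return true;

reject:
	...
check_old:
	/*
	 * Store failed or zswap is disabled: a stale entry from an earlier
	 * store of this folio must not survive, otherwise later writeback
	 * would overwrite the newer data in the swapfile.
	 */
	spin_lock(&tree->lock);
	entry = zswap_rb_search(&tree->rbroot, offset);
	if (entry)
		zswap_invalidate_entry(tree, entry);
	spin_unlock(&tree->lock);
	return false;
}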
Comments
On Sun, Feb 04, 2024 at 08:34:11AM +0000, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
>
> We may encounter duplicate entry in the zswap_store():
>
> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
>    the zswap entry, then got reused. This has been fixed.
>
> 2. !exclusive load mode, swapin folio will leave its zswap entry
>    on the tree, then swapout again. This has been removed.
>
> 3. one folio can be dirtied again after zswap_store(), so need to
>    zswap_store() again. This should be handled correctly.
>
> So we must invalidate the old duplicate entry before insert the
> new one, which actually doesn't have to be done at the beginning
> of zswap_store(). And this is a normal situation, we shouldn't
> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
> to detect swap entry UAF problem? But not very necessary here.)
>
> The good point is that we don't need to lock tree twice in the
> store success path.
>
> Note we still need to invalidate the old duplicate entry in the
> store failure path, otherwise the new data in swapfile could be
> overwrite by the old data in zswap pool when lru writeback.

I think this may have been introduced by 42c06a0e8ebe ("mm: kill
frontswap"). Frontswap used to check if the page was present in
frontswap and invalidate it before calling into zswap, so it would
invalidate a previously stored page when it is dirtied and swapped out
again, even if zswap is disabled.

Johannes, does this sound correct to you? If yes, I think we need a
proper Fixes tag and a stable backport as this may cause data
corruption.

> We have to do this even when !zswap_enabled since zswap can be
> disabled anytime. If the folio store success before, then got
> dirtied again but zswap disabled, we won't invalidate the old
> duplicate entry in the zswap_store(). So later lru writeback
> may overwrite the new data in swapfile.
>
> This fix is not good, since we have to grab lock to check everytime
> even when zswap is disabled, but it's simple.

Frontswap had a bitmap that we can query locklessly to find out if there
is an outdated stored page. I think we can overcome this with the
xarray, we can do a lockless lookup first, and only take the lock if
there is an outdated entry to remove.

Meanwhile I am not sure if acquiring the lock on every swapout even with
zswap disabled is acceptable, but I think it's the simplest fix for now,
unless we revive the bitmap.

> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> ---
>  mm/zswap.c | 33 +++++++++++++++------------------
>  1 file changed, 15 insertions(+), 18 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index cd67f7f6b302..0b7599f4116d 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1518,18 +1518,8 @@ bool zswap_store(struct folio *folio)
>  		return false;
>
>  	if (!zswap_enabled)
> -		return false;
> +		goto check_old;
>
> -	/*
> -	 * If this is a duplicate, it must be removed before attempting to store
> -	 * it, otherwise, if the store fails the old page won't be removed from
> -	 * the tree, and it might be written back overriding the new data.
> -	 */
> -	spin_lock(&tree->lock);
> -	entry = zswap_rb_search(&tree->rbroot, offset);
> -	if (entry)
> -		zswap_invalidate_entry(tree, entry);
> -	spin_unlock(&tree->lock);
>  	objcg = get_obj_cgroup_from_folio(folio);
>  	if (objcg && !obj_cgroup_may_zswap(objcg)) {
>  		memcg = get_mem_cgroup_from_objcg(objcg);
> @@ -1608,15 +1598,11 @@ bool zswap_store(struct folio *folio)
>  	/* map */
>  	spin_lock(&tree->lock);
>  	/*
> -	 * A duplicate entry should have been removed at the beginning of this
> -	 * function. Since the swap entry should be pinned, if a duplicate is
> -	 * found again here it means that something went wrong in the swap
> -	 * cache.
> +	 * The folio could be dirtied again, invalidate the possible old entry
> +	 * before insert this new entry.
>  	 */
> -	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> -		WARN_ON(1);
> +	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST)
>  		zswap_invalidate_entry(tree, dupentry);
> -	}

I always thought the loop here was confusing. We are holding the lock,
so it should be guaranteed that if we get -EEXIST once and invalidate
it, we won't find it the next time around.

This should really be a cmpxchg operation, which is simple with the
xarray. We can probably do the same with the rbtree, but perhaps it's
not worth it if the xarray change is coming soon.

For now, I think an if condition is clearer:

	if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
		zswap_invalidate_entry(tree, dupentry);
		/* Must succeed, we just removed the dup under the lock */
		WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
	}

>  	if (entry->length) {
>  		INIT_LIST_HEAD(&entry->lru);
>  		zswap_lru_add(&entry->pool->list_lru, entry);
> @@ -1638,6 +1624,17 @@ bool zswap_store(struct folio *folio)
>  reject:
>  	if (objcg)
>  		obj_cgroup_put(objcg);
> +check_old:
> +	/*
> +	 * If zswap store fail or zswap disabled, we must invalidate possible
> +	 * old entry which previously stored by this folio. Otherwise, later
> +	 * writeback could overwrite the new data in swapfile.
> +	 */
> +	spin_lock(&tree->lock);
> +	entry = zswap_rb_search(&tree->rbroot, offset);
> +	if (entry)
> +		zswap_invalidate_entry(tree, entry);
> +	spin_unlock(&tree->lock);
>  	return false;
>
>  shrink:
> --
> 2.40.1
>
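The "cmpxchg operation" mentioned here maps naturally onto the xarray API, since an xarray store hands back whatever entry was previously at that index. A minimal sketch of such an exchange-style insert, assuming a hypothetical xarray-based zswap tree (the tree->store field and the zswap_entry_free() helper are illustrative names only; the xarray conversion had not landed at this point):

	struct zswap_entry *old;

	/* xa_store() returns the entry previously stored at this offset, if any. */
	old = xa_store(&tree->store, offset, entry, GFP_KERNEL);
	if (xa_is_err(old))
		goto reject;		/* xarray node allocation failed: take the failure path */
	if (old)
		zswap_entry_free(old);	/* illustrative helper: release the stale duplicate */

This collapses "search, invalidate, insert" into a single operation, so neither a retry loop nor a WARN_ON would be needed.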
On 2024/2/6 06:55, Yosry Ahmed wrote:
> On Sun, Feb 04, 2024 at 08:34:11AM +0000, chengming.zhou@linux.dev wrote:
>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>
>> We may encounter duplicate entry in the zswap_store():
>>
>> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
>>    the zswap entry, then got reused. This has been fixed.
>>
>> 2. !exclusive load mode, swapin folio will leave its zswap entry
>>    on the tree, then swapout again. This has been removed.
>>
>> 3. one folio can be dirtied again after zswap_store(), so need to
>>    zswap_store() again. This should be handled correctly.
>>
>> So we must invalidate the old duplicate entry before insert the
>> new one, which actually doesn't have to be done at the beginning
>> of zswap_store(). And this is a normal situation, we shouldn't
>> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
>> to detect swap entry UAF problem? But not very necessary here.)
>>
>> The good point is that we don't need to lock tree twice in the
>> store success path.
>>
>> Note we still need to invalidate the old duplicate entry in the
>> store failure path, otherwise the new data in swapfile could be
>> overwrite by the old data in zswap pool when lru writeback.
>
> I think this may have been introduced by 42c06a0e8ebe ("mm: kill
> frontswap"). Frontswap used to check if the page was present in
> frontswap and invalidate it before calling into zswap, so it would
> invalidate a previously stored page when it is dirtied and swapped out
> again, even if zswap is disabled.
>
> Johannes, does this sound correct to you? If yes, I think we need a
> proper Fixes tag and a stable backport as this may cause data
> corruption.

I haven't looked into that commit. If this is true, will add:

Fixes: 42c06a0e8ebe ("mm: kill frontswap")

>> We have to do this even when !zswap_enabled since zswap can be
>> disabled anytime. If the folio store success before, then got
>> dirtied again but zswap disabled, we won't invalidate the old
>> duplicate entry in the zswap_store(). So later lru writeback
>> may overwrite the new data in swapfile.
>>
>> This fix is not good, since we have to grab lock to check everytime
>> even when zswap is disabled, but it's simple.
>
> Frontswap had a bitmap that we can query locklessly to find out if there
> is an outdated stored page. I think we can overcome this with the
> xarray, we can do a lockless lookup first, and only take the lock if
> there is an outdated entry to remove.

Yes, agree! We can lockless lookup once xarray lands in.

> Meanwhile I am not sure if acquiring the lock on every swapout even with
> zswap disabled is acceptable, but I think it's the simplest fix for now,
> unless we revive the bitmap.

Yeah, it's simple. I think bitmap is not needed if we will use xarray.

>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>>  mm/zswap.c | 33 +++++++++++++++------------------
>>  1 file changed, 15 insertions(+), 18 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index cd67f7f6b302..0b7599f4116d 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -1518,18 +1518,8 @@ bool zswap_store(struct folio *folio)
>>  		return false;
>>
>>  	if (!zswap_enabled)
>> -		return false;
>> +		goto check_old;
>>
>> -	/*
>> -	 * If this is a duplicate, it must be removed before attempting to store
>> -	 * it, otherwise, if the store fails the old page won't be removed from
>> -	 * the tree, and it might be written back overriding the new data.
>> -	 */
>> -	spin_lock(&tree->lock);
>> -	entry = zswap_rb_search(&tree->rbroot, offset);
>> -	if (entry)
>> -		zswap_invalidate_entry(tree, entry);
>> -	spin_unlock(&tree->lock);
>>  	objcg = get_obj_cgroup_from_folio(folio);
>>  	if (objcg && !obj_cgroup_may_zswap(objcg)) {
>>  		memcg = get_mem_cgroup_from_objcg(objcg);
>> @@ -1608,15 +1598,11 @@ bool zswap_store(struct folio *folio)
>>  	/* map */
>>  	spin_lock(&tree->lock);
>>  	/*
>> -	 * A duplicate entry should have been removed at the beginning of this
>> -	 * function. Since the swap entry should be pinned, if a duplicate is
>> -	 * found again here it means that something went wrong in the swap
>> -	 * cache.
>> +	 * The folio could be dirtied again, invalidate the possible old entry
>> +	 * before insert this new entry.
>>  	 */
>> -	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
>> -		WARN_ON(1);
>> +	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST)
>>  		zswap_invalidate_entry(tree, dupentry);
>> -	}
>
> I always thought the loop here was confusing. We are holding the lock,
> so it should be guaranteed that if we get -EEXIST once and invalidate
> it, we won't find it the next time around.

Ah, right, this is obvious.

> This should really be a cmpxchg operation, which is simple with the
> xarray. We can probably do the same with the rbtree, but perhaps it's
> not worth it if the xarray change is coming soon.
>
> For now, I think an if condition is clearer:
>
> 	if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> 		zswap_invalidate_entry(tree, dupentry);
> 		/* Must succeed, we just removed the dup under the lock */
> 		WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> 	}

This is clearer, will change to this version. Thanks!

>>  	if (entry->length) {
>>  		INIT_LIST_HEAD(&entry->lru);
>>  		zswap_lru_add(&entry->pool->list_lru, entry);
>> @@ -1638,6 +1624,17 @@ bool zswap_store(struct folio *folio)
>>  reject:
>>  	if (objcg)
>>  		obj_cgroup_put(objcg);
>> +check_old:
>> +	/*
>> +	 * If zswap store fail or zswap disabled, we must invalidate possible
>> +	 * old entry which previously stored by this folio. Otherwise, later
>> +	 * writeback could overwrite the new data in swapfile.
>> +	 */
>> +	spin_lock(&tree->lock);
>> +	entry = zswap_rb_search(&tree->rbroot, offset);
>> +	if (entry)
>> +		zswap_invalidate_entry(tree, entry);
>> +	spin_unlock(&tree->lock);
>>  	return false;
>>
>>  shrink:
>> --
>> 2.40.1
>>
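To make the lockless-lookup idea concrete: xa_load() is an RCU-safe read, so the common case of swapping out with zswap disabled and no stale entry present would not take any lock. A minimal sketch of the check_old path under that assumption, again with a hypothetical xarray-based tree and illustrative names (tree->store, zswap_entry_free()):

check_old:
	/* Lockless peek first; most callers find nothing and take no lock. */
	if (xa_load(&tree->store, offset)) {
		struct zswap_entry *old;

		/* xa_erase() takes the xarray lock internally. */
		old = xa_erase(&tree->store, offset);
		if (old)
			zswap_entry_free(old);	/* illustrative helper */
	}
	return false;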
On Tue, Feb 06, 2024 at 10:23:33AM +0800, Chengming Zhou wrote:
> On 2024/2/6 06:55, Yosry Ahmed wrote:
> > On Sun, Feb 04, 2024 at 08:34:11AM +0000, chengming.zhou@linux.dev wrote:
> >> From: Chengming Zhou <zhouchengming@bytedance.com>
> >>
> >> We may encounter duplicate entry in the zswap_store():
> >>
> >> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
> >>    the zswap entry, then got reused. This has been fixed.
> >>
> >> 2. !exclusive load mode, swapin folio will leave its zswap entry
> >>    on the tree, then swapout again. This has been removed.
> >>
> >> 3. one folio can be dirtied again after zswap_store(), so need to
> >>    zswap_store() again. This should be handled correctly.
> >>
> >> So we must invalidate the old duplicate entry before insert the
> >> new one, which actually doesn't have to be done at the beginning
> >> of zswap_store(). And this is a normal situation, we shouldn't
> >> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
> >> to detect swap entry UAF problem? But not very necessary here.)
> >>
> >> The good point is that we don't need to lock tree twice in the
> >> store success path.
> >>
> >> Note we still need to invalidate the old duplicate entry in the
> >> store failure path, otherwise the new data in swapfile could be
> >> overwrite by the old data in zswap pool when lru writeback.
> >
> > I think this may have been introduced by 42c06a0e8ebe ("mm: kill
> > frontswap"). Frontswap used to check if the page was present in
> > frontswap and invalidate it before calling into zswap, so it would
> > invalidate a previously stored page when it is dirtied and swapped out
> > again, even if zswap is disabled.
> >
> > Johannes, does this sound correct to you? If yes, I think we need a
> > proper Fixes tag and a stable backport as this may cause data
> > corruption.
>
> I haven't looked into that commit. If this is true, will add:
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")

You're right, this was introduced by the frontswap removal. The Fixes
tag is appropriate, as well as CC: stable@vger.kernel.org.

> >> We have to do this even when !zswap_enabled since zswap can be
> >> disabled anytime. If the folio store success before, then got
> >> dirtied again but zswap disabled, we won't invalidate the old
> >> duplicate entry in the zswap_store(). So later lru writeback
> >> may overwrite the new data in swapfile.
> >>
> >> This fix is not good, since we have to grab lock to check everytime
> >> even when zswap is disabled, but it's simple.
> >
> > Frontswap had a bitmap that we can query locklessly to find out if there
> > is an outdated stored page. I think we can overcome this with the
> > xarray, we can do a lockless lookup first, and only take the lock if
> > there is an outdated entry to remove.
>
> Yes, agree! We can lockless lookup once xarray lands in.
>
> > Meanwhile I am not sure if acquiring the lock on every swapout even with
> > zswap disabled is acceptable, but I think it's the simplest fix for now,
> > unless we revive the bitmap.
>
> Yeah, it's simple. I think bitmap is not needed if we will use xarray.

I don't think the lock is a dealbreaker in the short term. We also take
it in the load and invalidate paths even if zswap is disabled, to
maintain coherency during intermittent enabling/disabling. It hasn't
been an issue in production at least.

> >> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> >
> > For now, I think an if condition is clearer:
> >
> > 	if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> > 		zswap_invalidate_entry(tree, dupentry);
> > 		/* Must succeed, we just removed the dup under the lock */
> > 		WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> > 	}
>
> This is clearer, will change to this version.

Agreed! With that:

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
diff --git a/mm/zswap.c b/mm/zswap.c
index cd67f7f6b302..0b7599f4116d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1518,18 +1518,8 @@ bool zswap_store(struct folio *folio)
 		return false;
 
 	if (!zswap_enabled)
-		return false;
+		goto check_old;
 
-	/*
-	 * If this is a duplicate, it must be removed before attempting to store
-	 * it, otherwise, if the store fails the old page won't be removed from
-	 * the tree, and it might be written back overriding the new data.
-	 */
-	spin_lock(&tree->lock);
-	entry = zswap_rb_search(&tree->rbroot, offset);
-	if (entry)
-		zswap_invalidate_entry(tree, entry);
-	spin_unlock(&tree->lock);
 	objcg = get_obj_cgroup_from_folio(folio);
 	if (objcg && !obj_cgroup_may_zswap(objcg)) {
 		memcg = get_mem_cgroup_from_objcg(objcg);
@@ -1608,15 +1598,11 @@ bool zswap_store(struct folio *folio)
 	/* map */
 	spin_lock(&tree->lock);
 	/*
-	 * A duplicate entry should have been removed at the beginning of this
-	 * function. Since the swap entry should be pinned, if a duplicate is
-	 * found again here it means that something went wrong in the swap
-	 * cache.
+	 * The folio could be dirtied again, invalidate the possible old entry
+	 * before insert this new entry.
 	 */
-	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
-		WARN_ON(1);
+	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST)
 		zswap_invalidate_entry(tree, dupentry);
-	}
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&entry->pool->list_lru, entry);
@@ -1638,6 +1624,17 @@ bool zswap_store(struct folio *folio)
 reject:
 	if (objcg)
 		obj_cgroup_put(objcg);
+check_old:
+	/*
+	 * If zswap store fail or zswap disabled, we must invalidate possible
+	 * old entry which previously stored by this folio. Otherwise, later
+	 * writeback could overwrite the new data in swapfile.
+	 */
+	spin_lock(&tree->lock);
+	entry = zswap_rb_search(&tree->rbroot, offset);
+	if (entry)
+		zswap_invalidate_entry(tree, entry);
+	spin_unlock(&tree->lock);
 	return false;
 
 shrink:
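With the review feedback above applied (Yosry's suggestion, which Chengming agreed to adopt and Johannes acked), the insertion hunk in a follow-up version would presumably read as below. This is a sketch of the agreed direction, not the hunk as posted in this version:

	if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
		zswap_invalidate_entry(tree, dupentry);
		/* Must succeed: the duplicate was just removed under tree->lock. */
		WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
	}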