Message ID | 20221114000447.1681003-2-peterx@redhat.com |
---|---|
State | New |
Headers |
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>, Nadav Amit <nadav.amit@gmail.com>, Andrew Morton <akpm@linux-foundation.org>, peterx@redhat.com, Andrea Arcangeli <aarcange@redhat.com>, Ives van Hoorne <ives@codesandbox.io>, Axel Rasmussen <axelrasmussen@google.com>, Alistair Popple <apopple@nvidia.com>, stable@vger.kernel.org
Subject: [PATCH v3 1/2] mm/migrate: Fix read-only page got writable when recover pte
Date: Sun, 13 Nov 2022 19:04:46 -0500
Message-Id: <20221114000447.1681003-2-peterx@redhat.com>
In-Reply-To: <20221114000447.1681003-1-peterx@redhat.com>
References: <20221114000447.1681003-1-peterx@redhat.com>
X-Mailer: git-send-email 2.37.3
List-ID: <linux-kernel.vger.kernel.org>
Series | mm/migrate: Fix writable pte for read migration entry |
Commit Message
Peter Xu
Nov. 14, 2022, 12:04 a.m. UTC
Ives van Hoorne from codesandbox.io reported an issue regarding possible data loss of uffd-wp when applied to memfds on heavily loaded systems. The symptom is that pages read back from the snapshot child VMs show data mismatches. I can also reproduce it with a Rust reproducer provided by Ives that keeps taking snapshots of a 256MB VM; on a 32G system, starting 80 instances triggers the issue within ten minutes.

It turns out that writes were going through on some pages even though uffd-wp had been applied to the pte.

The problem is that when removing migration entries we did not really worry about the write bit, as long as we knew it was not a write migration entry. That may not be true: for some memory types (e.g. writable shmem) mk_pte can return a pte with the write bit set, so to recover a migration entry to its original state we need to explicitly wr-protect the pte, or a read migration entry will end up with the write bit set. For uffd that means writes can slip through the protection.

The relevant uffd code was introduced with the anon support in commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", 2020-04-07). However, anon should not suffer from this problem because anon ptes always have the write bit cleared already, so that may not be the proper Fixes target; the Fixes tag instead points at the uffd shmem support.

Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: stable@vger.kernel.org
Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
Reported-by: Ives van Hoorne <ives@codesandbox.io>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: Ives van Hoorne <ives@codesandbox.io>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/migrate.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
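For orientation, the pre-patch flow described above looks roughly like the following simplified sketch (paraphrased from mm/migrate.c with several steps elided; not the verbatim kernel code):

        pte = mk_pte(new, READ_ONCE(vma->vm_page_prot)); /* writable shmem: write bit may already be set */
        if (is_writable_migration_entry(entry))
                pte = maybe_mkwrite(pte, vma);
        else if (pte_swp_uffd_wp(*pvmw.pte))
                pte = pte_mkuffd_wp(pte); /* uffd-wp bit is set, but the write bit from mk_pte() is never cleared */

So for a read (non-writable) migration entry on writable shmem, the recovered pte can keep the write bit, and a later write skips the expected uffd-wp fault; the patch at the bottom of this page restores the wr-protection explicitly.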
Comments
On 14.11.22 01:04, Peter Xu wrote: > Ives van Hoorne from codesandbox.io reported an issue regarding possible > data loss of uffd-wp when applied to memfds on heavily loaded systems. The > symptom is some read page got data mismatch from the snapshot child VMs. > > Here I can also reproduce with a Rust reproducer that was provided by Ives > that keeps taking snapshot of a 256MB VM, on a 32G system when I initiate > 80 instances I can trigger the issues in ten minutes. > > It turns out that we got some pages write-through even if uffd-wp is > applied to the pte. > > The problem is, when removing migration entries, we didn't really worry > about write bit as long as we know it's not a write migration entry. That > may not be true, for some memory types (e.g. writable shmem) mk_pte can > return a pte with write bit set, then to recover the migration entry to its > original state we need to explicit wr-protect the pte or it'll has the > write bit set if it's a read migration entry. For uffd it can cause > write-through. > > The relevant code on uffd was introduced in the anon support, which is > commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", > 2020-04-07). However anon shouldn't suffer from this problem because anon > should already have the write bit cleared always, so that may not be a > proper Fixes target, while I'm adding the Fixes to be uffd shmem support. > > Cc: Andrea Arcangeli <aarcange@redhat.com> > Cc: stable@vger.kernel.org > Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs") > Reported-by: Ives van Hoorne <ives@codesandbox.io> > Reviewed-by: Alistair Popple <apopple@nvidia.com> > Tested-by: Ives van Hoorne <ives@codesandbox.io> > Signed-off-by: Peter Xu <peterx@redhat.com> > --- > mm/migrate.c | 8 +++++++- > 1 file changed, 7 insertions(+), 1 deletion(-) > > diff --git a/mm/migrate.c b/mm/migrate.c > index dff333593a8a..8b6351c08c78 100644 > --- a/mm/migrate.c > +++ b/mm/migrate.c > @@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio, > pte = pte_mkdirty(pte); > if (is_writable_migration_entry(entry)) > pte = maybe_mkwrite(pte, vma); > - else if (pte_swp_uffd_wp(*pvmw.pte)) > + else > + /* NOTE: mk_pte can have write bit set */ > + pte = pte_wrprotect(pte); > + > + if (pte_swp_uffd_wp(*pvmw.pte)) { > + WARN_ON_ONCE(pte_write(pte)); > pte = pte_mkuffd_wp(pte); > + } > > if (folio_test_anon(folio) && !is_readable_migration_entry(entry)) > rmap_flags |= RMAP_EXCLUSIVE; As raised, I don't agree to this generic non-uffd-wp change without further, clear justification. I won't nack it, but I won't ack it either.
On Tue, 15 Nov 2022 19:17:43 +0100 David Hildenbrand <david@redhat.com> wrote: > On 14.11.22 01:04, Peter Xu wrote: > > Ives van Hoorne from codesandbox.io reported an issue regarding possible > > data loss of uffd-wp when applied to memfds on heavily loaded systems. The > > symptom is some read page got data mismatch from the snapshot child VMs. > > > > Here I can also reproduce with a Rust reproducer that was provided by Ives > > that keeps taking snapshot of a 256MB VM, on a 32G system when I initiate > > 80 instances I can trigger the issues in ten minutes. > > > > It turns out that we got some pages write-through even if uffd-wp is > > applied to the pte. > > > > The problem is, when removing migration entries, we didn't really worry > > about write bit as long as we know it's not a write migration entry. That > > may not be true, for some memory types (e.g. writable shmem) mk_pte can > > return a pte with write bit set, then to recover the migration entry to its > > original state we need to explicit wr-protect the pte or it'll has the > > write bit set if it's a read migration entry. For uffd it can cause > > write-through. > > > > The relevant code on uffd was introduced in the anon support, which is > > commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", > > 2020-04-07). However anon shouldn't suffer from this problem because anon > > should already have the write bit cleared always, so that may not be a > > proper Fixes target, while I'm adding the Fixes to be uffd shmem support. > > > > ... > > > --- a/mm/migrate.c > > +++ b/mm/migrate.c > > @@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio, > > pte = pte_mkdirty(pte); > > if (is_writable_migration_entry(entry)) > > pte = maybe_mkwrite(pte, vma); > > - else if (pte_swp_uffd_wp(*pvmw.pte)) > > + else > > + /* NOTE: mk_pte can have write bit set */ > > + pte = pte_wrprotect(pte); > > + > > + if (pte_swp_uffd_wp(*pvmw.pte)) { > > + WARN_ON_ONCE(pte_write(pte)); Will this warnnig trigger in the scenario you and Ives have discovered? > > pte = pte_mkuffd_wp(pte); > > + } > > > > if (folio_test_anon(folio) && !is_readable_migration_entry(entry)) > > rmap_flags |= RMAP_EXCLUSIVE; > > As raised, I don't agree to this generic non-uffd-wp change without > further, clear justification. Pater, can you please work this further? > I won't nack it, but I won't ack it either. I wouldn't mind seeing a little code comment which explains why we're doing this.
Hi, Andrew, On Wed, Nov 30, 2022 at 02:24:25PM -0800, Andrew Morton wrote: > On Tue, 15 Nov 2022 19:17:43 +0100 David Hildenbrand <david@redhat.com> wrote: > > > On 14.11.22 01:04, Peter Xu wrote: > > > Ives van Hoorne from codesandbox.io reported an issue regarding possible > > > data loss of uffd-wp when applied to memfds on heavily loaded systems. The > > > symptom is some read page got data mismatch from the snapshot child VMs. > > > > > > Here I can also reproduce with a Rust reproducer that was provided by Ives > > > that keeps taking snapshot of a 256MB VM, on a 32G system when I initiate > > > 80 instances I can trigger the issues in ten minutes. > > > > > > It turns out that we got some pages write-through even if uffd-wp is > > > applied to the pte. > > > > > > The problem is, when removing migration entries, we didn't really worry > > > about write bit as long as we know it's not a write migration entry. That > > > may not be true, for some memory types (e.g. writable shmem) mk_pte can > > > return a pte with write bit set, then to recover the migration entry to its > > > original state we need to explicit wr-protect the pte or it'll has the > > > write bit set if it's a read migration entry. For uffd it can cause > > > write-through. > > > > > > The relevant code on uffd was introduced in the anon support, which is > > > commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", > > > 2020-04-07). However anon shouldn't suffer from this problem because anon > > > should already have the write bit cleared always, so that may not be a > > > proper Fixes target, while I'm adding the Fixes to be uffd shmem support. > > > > > > > ... > > > > > --- a/mm/migrate.c > > > +++ b/mm/migrate.c > > > @@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio, > > > pte = pte_mkdirty(pte); > > > if (is_writable_migration_entry(entry)) > > > pte = maybe_mkwrite(pte, vma); > > > - else if (pte_swp_uffd_wp(*pvmw.pte)) > > > + else > > > + /* NOTE: mk_pte can have write bit set */ > > > + pte = pte_wrprotect(pte); > > > + > > > + if (pte_swp_uffd_wp(*pvmw.pte)) { > > > + WARN_ON_ONCE(pte_write(pte)); > > Will this warnnig trigger in the scenario you and Ives have discovered? If without the above newly added wr-protect, yes. This is the case where we found we got write bit set even if uffd-wp bit is also set, hence allows the write to go through even if marked protected. > > > > pte = pte_mkuffd_wp(pte); > > > + } > > > > > > if (folio_test_anon(folio) && !is_readable_migration_entry(entry)) > > > rmap_flags |= RMAP_EXCLUSIVE; > > > > As raised, I don't agree to this generic non-uffd-wp change without > > further, clear justification. > > Pater, can you please work this further? I didn't reply here because I have already replied with the question in previous version with a few attempts. Quotting myself: https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ The thing is recovering the pte into its original form is the safest approach to me, so I think we need justification on why it's always safe to set the write bit. I've also got another longer email trying to explain why I think it's the other way round to be justfied, rather than justifying removal of the write bit for a read migration entry, here: https://lore.kernel.org/all/Y3O5bCXSbvKJrjRL@x1n/ > > > I won't nack it, but I won't ack it either. > > I wouldn't mind seeing a little code comment which explains why we're > doing this. 
I've got one more fixup to the same patch attached, with enriched comments on why we need wr-protect for read migration entries. Please have a look to see whether that helps, thanks.
On 01.12.22 16:28, Peter Xu wrote: > Hi, Andrew, > > On Wed, Nov 30, 2022 at 02:24:25PM -0800, Andrew Morton wrote: >> On Tue, 15 Nov 2022 19:17:43 +0100 David Hildenbrand <david@redhat.com> wrote: >> >>> On 14.11.22 01:04, Peter Xu wrote: >>>> Ives van Hoorne from codesandbox.io reported an issue regarding possible >>>> data loss of uffd-wp when applied to memfds on heavily loaded systems. The >>>> symptom is some read page got data mismatch from the snapshot child VMs. >>>> >>>> Here I can also reproduce with a Rust reproducer that was provided by Ives >>>> that keeps taking snapshot of a 256MB VM, on a 32G system when I initiate >>>> 80 instances I can trigger the issues in ten minutes. >>>> >>>> It turns out that we got some pages write-through even if uffd-wp is >>>> applied to the pte. >>>> >>>> The problem is, when removing migration entries, we didn't really worry >>>> about write bit as long as we know it's not a write migration entry. That >>>> may not be true, for some memory types (e.g. writable shmem) mk_pte can >>>> return a pte with write bit set, then to recover the migration entry to its >>>> original state we need to explicit wr-protect the pte or it'll has the >>>> write bit set if it's a read migration entry. For uffd it can cause >>>> write-through. >>>> >>>> The relevant code on uffd was introduced in the anon support, which is >>>> commit f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration", >>>> 2020-04-07). However anon shouldn't suffer from this problem because anon >>>> should already have the write bit cleared always, so that may not be a >>>> proper Fixes target, while I'm adding the Fixes to be uffd shmem support. >>>> >>> >>> ... >>> >>>> --- a/mm/migrate.c >>>> +++ b/mm/migrate.c >>>> @@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio, >>>> pte = pte_mkdirty(pte); >>>> if (is_writable_migration_entry(entry)) >>>> pte = maybe_mkwrite(pte, vma); >>>> - else if (pte_swp_uffd_wp(*pvmw.pte)) >>>> + else >>>> + /* NOTE: mk_pte can have write bit set */ >>>> + pte = pte_wrprotect(pte); >>>> + >>>> + if (pte_swp_uffd_wp(*pvmw.pte)) { >>>> + WARN_ON_ONCE(pte_write(pte)); >> >> Will this warnnig trigger in the scenario you and Ives have discovered? > > If without the above newly added wr-protect, yes. This is the case where > we found we got write bit set even if uffd-wp bit is also set, hence allows > the write to go through even if marked protected. > >> >>>> pte = pte_mkuffd_wp(pte); >>>> + } >>>> >>>> if (folio_test_anon(folio) && !is_readable_migration_entry(entry)) >>>> rmap_flags |= RMAP_EXCLUSIVE; >>> >>> As raised, I don't agree to this generic non-uffd-wp change without >>> further, clear justification. >> >> Pater, can you please work this further? > > I didn't reply here because I have already replied with the question in > previous version with a few attempts. Quotting myself: > > https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ > > The thing is recovering the pte into its original form is the > safest approach to me, so I think we need justification on why it's > always safe to set the write bit. 
> > I've also got another longer email trying to explain why I think it's the > other way round to be justfied, rather than justifying removal of the write > bit for a read migration entry, here: > And I disagree for this patch that is supposed to fix this hunk: @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, entry = pte_to_swp_entry(*pvmw.pte); if (is_write_migration_entry(entry)) pte = maybe_mkwrite(pte, vma); + else if (pte_swp_uffd_wp(*pvmw.pte)) + pte = pte_mkuffd_wp(pte); if (unlikely(is_zone_device_page(new))) { if (is_device_private_page(new)) { entry = make_device_private_entry(new, pte_write(pte)); pte = swp_entry_to_pte(entry); + if (pte_swp_uffd_wp(*pvmw.pte)) + pte = pte_mkuffd_wp(pte); } } There is really nothing to justify the other way around here. If it's broken fix it independently and properly backport it independenty. But we don't know about any such broken case. I have no energy to spare to argue further ;)
On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@redhat.com> wrote: > On 01.12.22 16:28, Peter Xu wrote: > > > > I didn't reply here because I have already replied with the question in > > previous version with a few attempts. Quotting myself: > > > > https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ > > > > The thing is recovering the pte into its original form is the > > safest approach to me, so I think we need justification on why it's > > always safe to set the write bit. > > > > I've also got another longer email trying to explain why I think it's the > > other way round to be justfied, rather than justifying removal of the write > > bit for a read migration entry, here: > > > > And I disagree for this patch that is supposed to fix this hunk: > > > @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, > entry = pte_to_swp_entry(*pvmw.pte); > if (is_write_migration_entry(entry)) > pte = maybe_mkwrite(pte, vma); > + else if (pte_swp_uffd_wp(*pvmw.pte)) > + pte = pte_mkuffd_wp(pte); > > if (unlikely(is_zone_device_page(new))) { > if (is_device_private_page(new)) { > entry = make_device_private_entry(new, pte_write(pte)); > pte = swp_entry_to_pte(entry); > + if (pte_swp_uffd_wp(*pvmw.pte)) > + pte = pte_mkuffd_wp(pte); > } > } David, I'm unclear on what you mean by the above. Can you please expand? > > There is really nothing to justify the other way around here. > If it's broken fix it independently and properly backport it independenty. > > But we don't know about any such broken case. > > I have no energy to spare to argue further ;) This is a silent data loss bug, which is about as bad as it gets. Under obscure conditions, fortunately. But please let's keep working it. Let's aim for something minimal for backporting purposes. We can revisit any cleanliness issues later. David, do you feel that the proposed fix will at least address the bug without adverse side-effects?
On 01.12.22 23:30, Andrew Morton wrote: > On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@redhat.com> wrote: > >> On 01.12.22 16:28, Peter Xu wrote: >>> >>> I didn't reply here because I have already replied with the question in >>> previous version with a few attempts. Quotting myself: >>> >>> https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ >>> >>> The thing is recovering the pte into its original form is the >>> safest approach to me, so I think we need justification on why it's >>> always safe to set the write bit. >>> >>> I've also got another longer email trying to explain why I think it's the >>> other way round to be justfied, rather than justifying removal of the write >>> bit for a read migration entry, here: >>> >> >> And I disagree for this patch that is supposed to fix this hunk: >> >> >> @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, >> entry = pte_to_swp_entry(*pvmw.pte); >> if (is_write_migration_entry(entry)) >> pte = maybe_mkwrite(pte, vma); >> + else if (pte_swp_uffd_wp(*pvmw.pte)) >> + pte = pte_mkuffd_wp(pte); >> >> if (unlikely(is_zone_device_page(new))) { >> if (is_device_private_page(new)) { >> entry = make_device_private_entry(new, pte_write(pte)); >> pte = swp_entry_to_pte(entry); >> + if (pte_swp_uffd_wp(*pvmw.pte)) >> + pte = pte_mkuffd_wp(pte); >> } >> } > > David, I'm unclear on what you mean by the above. Can you please > expand? > >> >> There is really nothing to justify the other way around here. >> If it's broken fix it independently and properly backport it independenty. >> >> But we don't know about any such broken case. >> >> I have no energy to spare to argue further ;) > > This is a silent data loss bug, which is about as bad as it gets. > Under obscure conditions, fortunately. But please let's keep working > it. Let's aim for something minimal for backporting purposes. We can > revisit any cleanliness issues later. Okay, you activated my energy reserves. > > David, do you feel that the proposed fix will at least address the bug > without adverse side-effects? Usually, when I suspect something is dodgy I unconsciously push back harder than I usually would. I just looked into the issue once again and realized that this patch here (and also my alternative proposal) most likely tackles the more-generic issue from the wrong direction. I found yet another such bug (most probably two, just too lazy to write another reproducer). Migration code does the right thing here -- IMHO -- and the issue should be fixed differently. I'm testing an alternative patch right now and will share it later today, along with a reproducer.
On 02.12.22 12:03, David Hildenbrand wrote: > On 01.12.22 23:30, Andrew Morton wrote: >> On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@redhat.com> wrote: >> >>> On 01.12.22 16:28, Peter Xu wrote: >>>> >>>> I didn't reply here because I have already replied with the question in >>>> previous version with a few attempts. Quotting myself: >>>> >>>> https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ >>>> >>>> The thing is recovering the pte into its original form is the >>>> safest approach to me, so I think we need justification on why it's >>>> always safe to set the write bit. >>>> >>>> I've also got another longer email trying to explain why I think it's the >>>> other way round to be justfied, rather than justifying removal of the write >>>> bit for a read migration entry, here: >>>> >>> >>> And I disagree for this patch that is supposed to fix this hunk: >>> >>> >>> @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, >>> entry = pte_to_swp_entry(*pvmw.pte); >>> if (is_write_migration_entry(entry)) >>> pte = maybe_mkwrite(pte, vma); >>> + else if (pte_swp_uffd_wp(*pvmw.pte)) >>> + pte = pte_mkuffd_wp(pte); >>> >>> if (unlikely(is_zone_device_page(new))) { >>> if (is_device_private_page(new)) { >>> entry = make_device_private_entry(new, pte_write(pte)); >>> pte = swp_entry_to_pte(entry); >>> + if (pte_swp_uffd_wp(*pvmw.pte)) >>> + pte = pte_mkuffd_wp(pte); >>> } >>> } >> >> David, I'm unclear on what you mean by the above. Can you please >> expand? >> >>> >>> There is really nothing to justify the other way around here. >>> If it's broken fix it independently and properly backport it independenty. >>> >>> But we don't know about any such broken case. >>> >>> I have no energy to spare to argue further ;) >> >> This is a silent data loss bug, which is about as bad as it gets. >> Under obscure conditions, fortunately. But please let's keep working >> it. Let's aim for something minimal for backporting purposes. We can >> revisit any cleanliness issues later. > > Okay, you activated my energy reserves. > >> >> David, do you feel that the proposed fix will at least address the bug >> without adverse side-effects? > > Usually, when I suspect something is dodgy I unconsciously push back > harder than I usually would. > > I just looked into the issue once again and realized that this patch > here (and also my alternative proposal) most likely tackles the > more-generic issue from the wrong direction. I found yet another such > bug (most probably two, just too lazy to write another reproducer). > Migration code does the right thing here -- IMHO -- and the issue should > be fixed differently. > > I'm testing an alternative patch right now and will share it later > today, along with a reproducer. > mprotect() reproducer attached.
On Fri, Dec 02, 2022 at 01:07:02PM +0100, David Hildenbrand wrote: > On 02.12.22 12:03, David Hildenbrand wrote: > > On 01.12.22 23:30, Andrew Morton wrote: > > > On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@redhat.com> wrote: > > > > > > > On 01.12.22 16:28, Peter Xu wrote: > > > > > > > > > > I didn't reply here because I have already replied with the question in > > > > > previous version with a few attempts. Quotting myself: > > > > > > > > > > https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ > > > > > > > > > > The thing is recovering the pte into its original form is the > > > > > safest approach to me, so I think we need justification on why it's > > > > > always safe to set the write bit. > > > > > > > > > > I've also got another longer email trying to explain why I think it's the > > > > > other way round to be justfied, rather than justifying removal of the write > > > > > bit for a read migration entry, here: > > > > > > > > > > > > > And I disagree for this patch that is supposed to fix this hunk: > > > > > > > > > > > > @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, > > > > entry = pte_to_swp_entry(*pvmw.pte); > > > > if (is_write_migration_entry(entry)) > > > > pte = maybe_mkwrite(pte, vma); > > > > + else if (pte_swp_uffd_wp(*pvmw.pte)) > > > > + pte = pte_mkuffd_wp(pte); > > > > if (unlikely(is_zone_device_page(new))) { > > > > if (is_device_private_page(new)) { > > > > entry = make_device_private_entry(new, pte_write(pte)); > > > > pte = swp_entry_to_pte(entry); > > > > + if (pte_swp_uffd_wp(*pvmw.pte)) > > > > + pte = pte_mkuffd_wp(pte); > > > > } > > > > } > > > > > > David, I'm unclear on what you mean by the above. Can you please > > > expand? > > > > > > > > > > > There is really nothing to justify the other way around here. > > > > If it's broken fix it independently and properly backport it independenty. > > > > > > > > But we don't know about any such broken case. > > > > > > > > I have no energy to spare to argue further ;) > > > > > > This is a silent data loss bug, which is about as bad as it gets. > > > Under obscure conditions, fortunately. But please let's keep working > > > it. Let's aim for something minimal for backporting purposes. We can > > > revisit any cleanliness issues later. > > > > Okay, you activated my energy reserves. > > > > > > > > David, do you feel that the proposed fix will at least address the bug > > > without adverse side-effects? > > > > Usually, when I suspect something is dodgy I unconsciously push back > > harder than I usually would. Please consider using unconsciousness only for self guidance, figuring out directions, or making decisions on one's own. For discussions on the list which can get more than one person involved, we do need consciousness and reasonings. Thanks for the reproducer, that's definitely good reasonings. Do you have other reproducer that can trigger an issue without mprotect()? As I probably mentioned before in other threads mprotect() is IMHO conceptually against uffd-wp and I don't yet figured out how to use them all right. For example, we can uffd-wr-protect a pte in uffd-wp range, then if we do "mprotect(RW)" it's hard to tell whether the user wants it write or not. E.g., using mprotect(RW) to resolve page faults should be wrong because it'll not touch the uffd-wp bit at all. I confess I never thought more on how we should define the interactions between uffd-wp and mprotect. 
In short, it'll be great if you have other reproducers for any uffd-wp issues other than mprotect().

I said that also because I just got another message from Ives privately that there _seems_ to be yet another, even harder to reproduce, bug here (Ives, feel free to fill in any more information if you got it). So if you can figure out what's missing and already write a reproducer, that'll be perfect.

Thanks,

> > I just looked into the issue once again and realized that this patch here (and also my alternative proposal) most likely tackles the more-generic issue from the wrong direction. I found yet another such bug (most probably two, just too lazy to write another reproducer). Migration code does the right thing here -- IMHO -- and the issue should be fixed differently.
> >
> > I'm testing an alternative patch right now and will share it later today, along with a reproducer.
> >
>
> mprotect() reproducer attached.
>
> --
> Thanks,
>
> David / dhildenb
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <errno.h>
> #include <poll.h>
> #include <pthread.h>
> #include <sys/mman.h>
> #include <sys/syscall.h>
> #include <sys/ioctl.h>
> #include <linux/memfd.h>
> #include <linux/userfaultfd.h>
>
> size_t pagesize;
> int uffd;
>
> static void *uffd_thread_fn(void *arg)
> {
>         static struct uffd_msg msg;
>         ssize_t nread;
>
>         while (1) {
>                 struct pollfd pollfd;
>                 int nready;
>
>                 pollfd.fd = uffd;
>                 pollfd.events = POLLIN;
>                 nready = poll(&pollfd, 1, -1);
>                 if (nready == -1) {
>                         fprintf(stderr, "poll() failed: %d\n", errno);
>                         exit(1);
>                 }
>
>                 nread = read(uffd, &msg, sizeof(msg));
>                 if (nread <= 0)
>                         continue;
>
>                 if (msg.event != UFFD_EVENT_PAGEFAULT ||
>                     !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP)) {
>                         printf("FAIL: wrong uffd-wp event fired\n");
>                         exit(1);
>                 }
>
>                 printf("PASS: uffd-wp fired\n");
>                 exit(0);
>         }
> }
>
> static int setup_uffd(char *map)
> {
>         struct uffdio_api uffdio_api;
>         struct uffdio_register uffdio_register;
>         struct uffdio_range uffd_range;
>         pthread_t thread;
>
>         uffd = syscall(__NR_userfaultfd,
>                        O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
>         if (uffd < 0) {
>                 fprintf(stderr, "syscall() failed: %d\n", errno);
>                 return -errno;
>         }
>
>         uffdio_api.api = UFFD_API;
>         uffdio_api.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP;
>         if (ioctl(uffd, UFFDIO_API, &uffdio_api) < 0) {
>                 fprintf(stderr, "UFFDIO_API failed: %d\n", errno);
>                 return -errno;
>         }
>
>         if (!(uffdio_api.features & UFFD_FEATURE_PAGEFAULT_FLAG_WP)) {
>                 fprintf(stderr, "UFFD_FEATURE_WRITEPROTECT missing\n");
>                 return -ENOSYS;
>         }
>
>         uffdio_register.range.start = (unsigned long) map;
>         uffdio_register.range.len = pagesize;
>         uffdio_register.mode = UFFDIO_REGISTER_MODE_WP;
>         if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) < 0) {
>                 fprintf(stderr, "UFFDIO_REGISTER failed: %d\n", errno);
>                 return -errno;
>         }
>
>         pthread_create(&thread, NULL, uffd_thread_fn, NULL);
>
>         return 0;
> }
>
> int main(int argc, char **argv)
> {
>         struct uffdio_writeprotect uffd_writeprotect;
>         char *map;
>         int fd;
>
>         pagesize = getpagesize();
>         fd = memfd_create("test", 0);
>         if (fd < 0) {
>                 fprintf(stderr, "memfd_create() failed\n");
>                 return -errno;
>         }
>         if (ftruncate(fd, pagesize)) {
>                 fprintf(stderr, "ftruncate() failed\n");
>                 return -errno;
>         }
>
>         /* Start out without write protection. */
>         map = mmap(NULL, pagesize, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
>         if (map == MAP_FAILED) {
>                 fprintf(stderr, "mmap() failed\n");
>                 return -errno;
>         }
>
>         if (setup_uffd(map))
>                 return 1;
>
>         /* Populate a page ... */
>         memset(map, 0, pagesize);
>
>         /* ... and write-protect it using uffd-wp. */
>         uffd_writeprotect.range.start = (unsigned long) map;
>         uffd_writeprotect.range.len = pagesize;
>         uffd_writeprotect.mode = UFFDIO_WRITEPROTECT_MODE_WP;
>         if (ioctl(uffd, UFFDIO_WRITEPROTECT, &uffd_writeprotect)) {
>                 fprintf(stderr, "UFFDIO_WRITEPROTECT failed: %d\n", errno);
>                 return -errno;
>         }
>
>         /* Write-protect the whole mapping temporarily. */
>         mprotect(map, pagesize, PROT_READ);
>         mprotect(map, pagesize, PROT_READ|PROT_WRITE);
>
>         /* Test if uffd-wp fires. */
>         memset(map, 1, pagesize);
>
>         printf("FAIL: uffd-wp did not fire\n");
>         return 1;
> }
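For completeness, the attached program is a self-contained userspace reproducer; something along these lines should build and run it (file and binary names here are arbitrary, and UFFD_USER_MODE_ONLY assumes reasonably recent UAPI headers and a kernel that permits unprivileged userfaultfd):

        gcc -O2 -pthread -o uffd-wp-mprotect uffd-wp-mprotect.c
        ./uffd-wp-mprotect

Going by its printf()s, it reports "FAIL: uffd-wp did not fire" on a kernel where the mprotect() sequence loses the uffd-wp protection, and "PASS: uffd-wp fired" otherwise.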
>>>> >>>> David, do you feel that the proposed fix will at least address the bug >>>> without adverse side-effects? >>> >>> Usually, when I suspect something is dodgy I unconsciously push back >>> harder than I usually would. > > Please consider using unconsciousness only for self guidance, figuring out > directions, or making decisions on one's own. Yeah, sorry about my communication. I expressed that this approach felt wrong to me, I just wasn't able to phrase exactly why I thought migration is doing the right thing and didn't have a lot of time to look into the details. Now I dedicated some time and realized that mproctect() is doing the exact same thing, it became clearer to me why migration code wasn't broken before. > > For discussions on the list which can get more than one person involved, we > do need consciousness and reasonings. Yeah, I need vacation. > > Thanks for the reproducer, that's definitely good reasonings. Do you have > other reproducer that can trigger an issue without mprotect()? As noted in the RFC patch I sent, I suspect NUMA hinting page remapping might similarly trigger it. I did not try reproducing it, though. > > As I probably mentioned before in other threads mprotect() is IMHO > conceptually against uffd-wp and I don't yet figured out how to use them > all right. For example, we can uffd-wr-protect a pte in uffd-wp range, > then if we do "mprotect(RW)" it's hard to tell whether the user wants it > write or not. E.g., using mprotect(RW) to resolve page faults should be > wrong because it'll not touch the uffd-wp bit at all. I confess I never > thought more on how we should define the interactions between uffd-wp and > mprotect. > > In short, it'll be great if you have other reproducers for any uffd-wp > issues other than mprotect(). > > I said that also because I just got another message from Ives privately > that there _seems_ to have yet another even harder to reproduce bug here > (Ives, feel free to fill in any more information if you got it). So if you > can figure out what's missing and already write a reproducer, that'll be > perfect. Maybe NUMA hitning on the fallback path, when we didn't migrate or migration failed?
On 01.12.22 23:30, Andrew Morton wrote: > On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@redhat.com> wrote: > >> On 01.12.22 16:28, Peter Xu wrote: >>> >>> I didn't reply here because I have already replied with the question in >>> previous version with a few attempts. Quotting myself: >>> >>> https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/ >>> >>> The thing is recovering the pte into its original form is the >>> safest approach to me, so I think we need justification on why it's >>> always safe to set the write bit. >>> >>> I've also got another longer email trying to explain why I think it's the >>> other way round to be justfied, rather than justifying removal of the write >>> bit for a read migration entry, here: >>> >> >> And I disagree for this patch that is supposed to fix this hunk: >> >> >> @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, >> entry = pte_to_swp_entry(*pvmw.pte); >> if (is_write_migration_entry(entry)) >> pte = maybe_mkwrite(pte, vma); >> + else if (pte_swp_uffd_wp(*pvmw.pte)) >> + pte = pte_mkuffd_wp(pte); >> >> if (unlikely(is_zone_device_page(new))) { >> if (is_device_private_page(new)) { >> entry = make_device_private_entry(new, pte_write(pte)); >> pte = swp_entry_to_pte(entry); >> + if (pte_swp_uffd_wp(*pvmw.pte)) >> + pte = pte_mkuffd_wp(pte); >> } >> } > > David, I'm unclear on what you mean by the above. Can you please > expand? > >> >> There is really nothing to justify the other way around here. >> If it's broken fix it independently and properly backport it independenty. >> >> But we don't know about any such broken case. >> >> I have no energy to spare to argue further ;) > > This is a silent data loss bug, which is about as bad as it gets. > Under obscure conditions, fortunately. But please let's keep working > it. Let's aim for something minimal for backporting purposes. We can > revisit any cleanliness issues later. > > David, do you feel that the proposed fix will at least address the bug > without adverse side-effects? Just to answer that question clearly: it will fix this bug, but it's likely that other similar bugs remain (suspecting NUMA hinting). Adverse side effect will be that some PTEs that could we writable won't be writable. I assume it's not too bad in practice for this particular case. I proposed an alternative fix and identified other possible broken cases. Again, I don't NAK this patch as is, it just logically doesn't make sense to me to handle this case differently to the other vma->vm_page_prot users. (more details in the other thread)
diff --git a/mm/migrate.c b/mm/migrate.c
index dff333593a8a..8b6351c08c78 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -213,8 +213,14 @@ static bool remove_migration_pte(struct folio *folio,
 			pte = pte_mkdirty(pte);
 		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
-		else if (pte_swp_uffd_wp(*pvmw.pte))
+		else
+			/* NOTE: mk_pte can have write bit set */
+			pte = pte_wrprotect(pte);
+
+		if (pte_swp_uffd_wp(*pvmw.pte)) {
+			WARN_ON_ONCE(pte_write(pte));
 			pte = pte_mkuffd_wp(pte);
+		}
 
 		if (folio_test_anon(folio) && !is_readable_migration_entry(entry))
 			rmap_flags |= RMAP_EXCLUSIVE;
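Read together with the discussion above, the intent of the hunk can be annotated roughly as follows (a simplified sketch with commentary added, not the literal upstream code):

        if (is_writable_migration_entry(entry))
                pte = maybe_mkwrite(pte, vma);
        else
                /*
                 * Read migration entry: recover the pte as write-protected,
                 * because mk_pte() on e.g. writable shmem may have produced
                 * a pte with the write bit already set.
                 */
                pte = pte_wrprotect(pte);

        if (pte_swp_uffd_wp(*pvmw.pte)) {
                /*
                 * A uffd-wp pte should never be writable; the warning is
                 * meant to catch the write-bit-plus-uffd-wp combination
                 * discussed in this thread.
                 */
                WARN_ON_ONCE(pte_write(pte));
                pte = pte_mkuffd_wp(pte);
        }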