Message ID | 20230214075710.2401855-2-stevensd@google.com |
---|---|
State | New |
Headers | show |
Series | [1/2] mm/khugepaged: set THP as uptodate earlier for shmem |
Commit Message
David Stevens
Feb. 14, 2023, 7:57 a.m. UTC
From: David Stevens <stevensd@chromium.org>

Make sure that collapse_file respects any userfaultfds registered with
MODE_MISSING. If userspace has any such userfaultfds registered, then for
any page which it knows to be missing, it may expect a
UFFD_EVENT_PAGEFAULT. This means collapse_file needs to take care when
collapsing a shmem range would result in replacing an empty page with a
THP, so that it doesn't break userfaultfd.

Synchronization when checking for userfaultfds in collapse_file is tricky
because the mmap locks can't be used to prevent races with the
registration of new userfaultfds. Instead, we provide synchronization by
ensuring that userspace cannot observe the fact that pages are missing
before we check for userfaultfds. Although this allows registration of a
userfaultfd to race with collapse_file, it ensures that userspace cannot
observe any pages transition from missing to present after such a race.
This makes such a race indistinguishable from a collapse that occurred
immediately before the userfaultfd registration.

The first step to provide this synchronization is to stop filling gaps
during the loop iterating over the target range, since the page cache
lock can be dropped during that loop. The second step is to fill the
gaps with XA_RETRY_ENTRY after the page cache lock is acquired the final
time, to avoid races with accesses to the page cache that only take the
RCU read lock.

This fix is targeted at khugepaged, but the change also applies to
MADV_COLLAPSE. MADV_COLLAPSE on a range with a userfaultfd will now
return EBUSY if there are any missing pages (instead of succeeding on
shmem and returning EINVAL on anonymous memory). There is also now a
window during MADV_COLLAPSE where a fault on a missing page will cause
the syscall to fail with EAGAIN.

The fact that intermediate page cache state can no longer be observed
before the rollback of a failed collapse is also technically a
userspace-visible change (via at least SEEK_DATA and SEEK_END), but it
is exceedingly unlikely that anything relies on being able to observe
that transient state.

Signed-off-by: David Stevens <stevensd@chromium.org>
---
 mm/khugepaged.c | 66 +++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 58 insertions(+), 8 deletions(-)
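For context on the userspace contract described above, here is a minimal
sketch in C of the two sides involved (error handling mostly elided; the
helper names register_uffd_missing and try_collapse are invented for
illustration, and MADV_COLLAPSE requires Linux 6.1+ kernel headers):

	#include <errno.h>
	#include <fcntl.h>
	#include <linux/userfaultfd.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Register a MODE_MISSING userfaultfd on [addr, addr + len). */
	static int register_uffd_missing(void *addr, size_t len)
	{
		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
		struct uffdio_api api = { .api = UFFD_API };
		struct uffdio_register reg = {
			.range = { .start = (unsigned long)addr, .len = len },
			.mode = UFFDIO_REGISTER_MODE_MISSING,
		};

		if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
		    ioctl(uffd, UFFDIO_REGISTER, &reg)) {
			if (uffd >= 0)
				close(uffd);
			return -1;
		}
		return uffd;
	}

	/*
	 * With such a registration armed, the kernel must not silently turn
	 * a missing page into a present one. After this patch, MADV_COLLAPSE
	 * over a range with holes reports EBUSY, and a fault racing with the
	 * collapse window reports EAGAIN.
	 */
	static int try_collapse(void *addr, size_t len)
	{
		if (madvise(addr, len, MADV_COLLAPSE))
			return -errno;	/* e.g. -EBUSY or -EAGAIN here */
		return 0;
	}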
Comments
Hi, David,

On Tue, Feb 14, 2023 at 04:57:10PM +0900, David Stevens wrote:
> From: David Stevens <stevensd@chromium.org>
>
> Make sure that collapse_file respects any userfaultfds registered with
> MODE_MISSING. If userspace has any such userfaultfds registered, then
> for any page which it knows to be missing, it may expect a
> UFFD_EVENT_PAGEFAULT. This means collapse_file needs to take care when
> collapsing a shmem range would result in replacing an empty page with a
> THP, so that it doesn't break userfaultfd.
[snip]
> Signed-off-by: David Stevens <stevensd@chromium.org>
> ---
>  mm/khugepaged.c | 66 +++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 58 insertions(+), 8 deletions(-)

Could you attach a changelog in your next post (probably with a cover
letter when patches more than one)?

Your patch 1 reminded me that, I think both lseek and mincore will not
report DATA but HOLE on the thp holes during collapse, no matter we fill
hpage in (as long as hpage being !uptodate) or not (as what you do with
this one).

However I don't understand how this new patch can avoid the same race
issue I mentioned in the last version at all.
On Wed, Feb 15, 2023 at 7:35 AM Peter Xu <peterx@redhat.com> wrote:
>
> Hi, David,
>
> On Tue, Feb 14, 2023 at 04:57:10PM +0900, David Stevens wrote:
[snip]
> Could you attach a changelog in your next post (probably with a cover
> letter when patches more than one)?
>
> Your patch 1 reminded me that, I think both lseek and mincore will not
> report DATA but HOLE on the thp holes during collapse, no matter we fill
> hpage in (as long as hpage being !uptodate) or not (as what you do with
> this one).
>
> However I don't understand how this new patch can avoid the same race
> issue I mentioned in the last version at all.

If find_get_entry sees an XA_RETRY_ENTRY, then it will re-read from the
xarray. This means find_get_entry will loop while we're finalizing the
collapse - either until we finalize the collapse with the multi-index
hpage entry or abort the collapse and clear the retry entry. This means
that even if userspace registers a userfaultfd and calls lseek after
khugepaged's check for userfaultfds, the call to lseek will block until
the collapse is finished.

There are a number of other places in filemap.c/shmem.c that do their own
iteration over the xarray, and they all retry on xas_retry() as well.

-David
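For reference, the retry behavior David describes follows the pattern
below (a condensed sketch modeled on find_get_entry() in mm/filemap.c,
simplified rather than verbatim kernel code):

	static void *page_cache_lookup_sketch(struct address_space *mapping,
					      pgoff_t index)
	{
		XA_STATE(xas, &mapping->i_pages, index);
		struct folio *folio;

		rcu_read_lock();
	repeat:
		xas_reset(&xas);
		folio = xas_load(&xas);
		/*
		 * xas_retry() is true for XA_RETRY_ENTRY, so a reader that
		 * races with the collapse keeps re-walking the tree until
		 * the slot holds either the hpage or NULL again.
		 */
		if (xas_retry(&xas, folio))
			goto repeat;
		rcu_read_unlock();
		return folio;
	}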
On Wed, Feb 15, 2023 at 10:57:11AM +0900, David Stevens wrote:
> On Wed, Feb 15, 2023 at 7:35 AM Peter Xu <peterx@redhat.com> wrote:
[snip]
> > However I don't understand how this new patch can avoid the same race
> > issue I mentioned in the last version at all.
>
> If find_get_entry sees an XA_RETRY_ENTRY, then it will re-read from the
> xarray. This means find_get_entry will loop while we're finalizing the
> collapse - either until we finalize the collapse with the multi-index
> hpage entry or abort the collapse and clear the retry entry. This means
> that even if userspace registers a userfaultfd and calls lseek after
> khugepaged's check for userfaultfds, the call to lseek will block until
> the collapse is finished.
>
> There are a number of other places in filemap.c/shmem.c that do their
> own iteration over the xarray, and they all retry on xas_retry() as
> well.

I've no problem on using RETRY entries (as long as others are fine with
it :). It seems your logic depends on patch 1 being there already, so
right after the RETRY got replaced with the thp it'll show
Uptodate==DATA. However I doubt whether patch 1 is correct at all..
Maybe that can be instead fixed by having:

	folio_mark_uptodate(folio);

To be before:

	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
	xas_store(&xas, hpage);

To replace patch 1, but I think there's still some issue in patch 2 even
if it works.

Ouch, I cut the codes.. I'll comment inline in another reply.
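Spelled out, the reordering Peter suggests would look roughly like this
(a sketch of the suggestion, not a tested patch):

	/*
	 * Mark the huge page uptodate *before* publishing it, so that no
	 * page cache lookup can observe a present but !uptodate entry.
	 */
	folio_mark_uptodate(folio);

	/* Join all the small entries into a single multi-index entry. */
	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
	xas_store(&xas, hpage);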
On Tue, Feb 14, 2023 at 04:57:10PM +0900, David Stevens wrote:
> From: David Stevens <stevensd@chromium.org>
>
> Make sure that collapse_file respects any userfaultfds registered with
> MODE_MISSING.
[snip]
> Signed-off-by: David Stevens <stevensd@chromium.org>
> ---
>  mm/khugepaged.c | 66 +++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b648f1053d95..8c2e2349e883 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -55,6 +55,7 @@ enum scan_result {
>  	SCAN_CGROUP_CHARGE_FAIL,
>  	SCAN_TRUNCATED,
>  	SCAN_PAGE_HAS_PRIVATE,
> +	SCAN_PAGE_FILLED,

PS: You may want to also touch SCAN_STATUS in huge_memory.h next time.

>  };
>
>  #define CREATE_TRACE_POINTS
> @@ -1725,8 +1726,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
>   *  - allocate and lock a new huge page;
>   *  - scan page cache replacing old pages with the new one
>   *    + swap/gup in pages if necessary;
> - *    + fill in gaps;

IIUC it's not a complete removal, but just moved downwards:

>   *    + keep old pages around in case rollback is required;
> + *  - finalize updates to the page cache;

  + fill in gaps with RETRY entries
  + detect race conditions with userfaultfds

>   *  - if replacing succeeds:
>   *    + copy data over;
>   *    + free old pages;
> @@ -1805,13 +1806,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  				result = SCAN_TRUNCATED;
>  				goto xa_locked;
>  			}
> -			xas_set(&xas, index);
> +			xas_set(&xas, index + 1);
>  		}
>  		if (!shmem_charge(mapping->host, 1)) {
>  			result = SCAN_FAIL;
>  			goto xa_locked;
>  		}
> -		xas_store(&xas, hpage);
>  		nr_none++;
>  		continue;
>  	}
> @@ -1970,6 +1970,56 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  		put_page(page);
>  		goto xa_unlocked;
>  	}
> +
> +	if (nr_none) {
> +		struct vm_area_struct *vma;
> +		int nr_none_check = 0;
> +
> +		xas_unlock_irq(&xas);
> +		i_mmap_lock_read(mapping);
> +		xas_lock_irq(&xas);
> +
> +		xas_set(&xas, start);
> +		for (index = start; index < end; index++) {
> +			if (!xas_next(&xas)) {
> +				xas_store(&xas, XA_RETRY_ENTRY);
> +				nr_none_check++;
> +			}
> +		}
> +
> +		if (nr_none != nr_none_check) {
> +			result = SCAN_PAGE_FILLED;
> +			goto immap_locked;
> +		}
> +
> +		/*
> +		 * If userspace observed a missing page in a VMA with an armed
> +		 * userfaultfd, then it might expect a UFFD_EVENT_PAGEFAULT for
> +		 * that page, so we need to roll back to avoid suppressing such
> +		 * an event. Any userfaultfds armed after this point will not be
> +		 * able to observe any missing pages due to the previously
> +		 * inserted retry entries.
> +		 */
> +		vma_interval_tree_foreach(vma, &mapping->i_mmap, start, start) {
> +			if (userfaultfd_missing(vma)) {
> +				result = SCAN_EXCEED_NONE_PTE;
> +				goto immap_locked;
> +			}
> +		}
> +
> +immap_locked:
> +		i_mmap_unlock_read(mapping);
> +		if (result != SCAN_SUCCEED) {
> +			xas_set(&xas, start);
> +			for (index = start; index < end; index++) {
> +				if (xas_next(&xas) == XA_RETRY_ENTRY)
> +					xas_store(&xas, NULL);
> +			}
> +
> +			goto xa_locked;
> +		}
> +	}
> +

Until here, all look fine to me (ignoring patch 1 for now; assuming the
hpage is always uptodate).

My question is after here we'll release page cache lock again before
try_to_unmap_flush(), but is it safe to keep RETRY entries after
releasing page cache lock? It means other threads can be spinning. I
assume page lock is always safe and sleepable, but not sure about the
page cache lock here.

>  	nr = thp_nr_pages(hpage);
>
>  	if (is_shmem)
> @@ -2068,15 +2118,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  	}
>
>  	xas_set(&xas, start);
> -	xas_for_each(&xas, page, end - 1) {
> +	end = index;
> +	for (index = start; index < end; index++) {
> +		xas_next(&xas);
>  		page = list_first_entry_or_null(&pagelist,
>  				struct page, lru);
>  		if (!page || xas.xa_index < page->index) {
> -			if (!nr_none)
> -				break;
>  			nr_none--;
> -			/* Put holes back where they were */
> -			xas_store(&xas, NULL);
>  			continue;
>  		}
>
> @@ -2592,11 +2640,13 @@ static int madvise_collapse_errno(enum scan_result r)
>  	case SCAN_ALLOC_HUGE_PAGE_FAIL:
>  		return -ENOMEM;
>  	case SCAN_CGROUP_CHARGE_FAIL:
> +	case SCAN_EXCEED_NONE_PTE:
>  		return -EBUSY;
>  	/* Resource temporary unavailable - trying again might succeed */
>  	case SCAN_PAGE_LOCK:
>  	case SCAN_PAGE_LRU:
>  	case SCAN_DEL_PAGE_LRU:
> +	case SCAN_PAGE_FILLED:
>  		return -EAGAIN;
>  	/*
>  	 * Other: Trying again likely not to succeed / error intrinsic to
> --
> 2.39.1.581.gbfd45094c4-goog
>
On Thu, Feb 16, 2023 at 7:48 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Feb 14, 2023 at 04:57:10PM +0900, David Stevens wrote:
[snip]
>
> Until here, all look fine to me (ignoring patch 1 for now; assuming the
> hpage is always uptodate).
>
> My question is after here we'll release page cache lock again before
> try_to_unmap_flush(), but is it safe to keep RETRY entries after
> releasing page cache lock? It means other threads can be spinning. I
> assume page lock is always safe and sleepable, but not sure about the
> page cache lock here.

We insert the multi-index entry for hpage before releasing the page cache
lock, which should replace all of the XA_RETRY_ENTRYs. So the page cache
will be fully up to date when we release the lock, at least in terms of
which pages it contains.

-David
On Thu, Feb 16, 2023 at 10:37:47AM +0900, David Stevens wrote:
> On Thu, Feb 16, 2023 at 7:48 AM Peter Xu <peterx@redhat.com> wrote:
[snip]
> > My question is after here we'll release page cache lock again before
> > try_to_unmap_flush(), but is it safe to keep RETRY entries after
> > releasing page cache lock?
>
> We insert the multi-index entry for hpage before releasing the page cache
> lock, which should replace all of the XA_RETRY_ENTRYs. So the page cache
> will be fully up to date when we release the lock, at least in terms of
> which pages it contains.

IIUC we released it before copying the pages:

xa_locked:
	xas_unlock_irq(&xas); <-------------------------------- here
xa_unlocked:

	/*
	 * If collapse is successful, flush must be done now before copying.
	 * If collapse is unsuccessful, does flush actually need to be done?
	 * Do it anyway, to clear the state.
	 */
	try_to_unmap_flush();

Before insertion of the multi-index:

	/* Join all the small entries into a single multi-index entry. */
	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
	xas_store(&xas, hpage);

Thanks,
On Thu, Feb 16, 2023 at 6:41 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Feb 16, 2023 at 10:37:47AM +0900, David Stevens wrote:
[snip]
> > We insert the multi-index entry for hpage before releasing the page cache
> > lock, which should replace all of the XA_RETRY_ENTRYs. So the page cache
> > will be fully up to date when we release the lock, at least in terms of
> > which pages it contains.
>
> IIUC we released it before copying the pages:

The huge page is locked until the copy is done. It should be fine unless
the users inspect the page content without acquiring page lock.

>
> xa_locked:
> 	xas_unlock_irq(&xas); <-------------------------------- here
> xa_unlocked:
>
> 	/*
> 	 * If collapse is successful, flush must be done now before copying.
> 	 * If collapse is unsuccessful, does flush actually need to be done?
> 	 * Do it anyway, to clear the state.
> 	 */
> 	try_to_unmap_flush();
>
> Before insertion of the multi-index:
>
> 	/* Join all the small entries into a single multi-index entry. */
> 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> 	xas_store(&xas, hpage);
>
> Thanks,
Hi, Yang,

On Thu, Feb 16, 2023 at 01:58:55PM -0800, Yang Shi wrote:
> > IIUC we released it before copying the pages:
>
> The huge page is locked until the copy is done. It should be fine unless
> the users inspect the page content without acquiring page lock.

The current patch from David has replaced "insert hpage into holes" with
"insert RETRY entries into holes", so IMHO the hpage is not visible at all
when releasing page cache lock here.

All the accessors (including RCU protected ones to access page cache;
those may not need to take the page lock) should be spinning on the RETRY
entry, which it seems fine to me. But my question was whether it's legal
to keep them spinning even after releasing the page cache lock.

Thanks,

[snip]
On Thu, Feb 16, 2023 at 3:07 PM Peter Xu <peterx@redhat.com> wrote:
>
> Hi, Yang,
>
> On Thu, Feb 16, 2023 at 01:58:55PM -0800, Yang Shi wrote:
> > > IIUC we released it before copying the pages:
> >
> > The huge page is locked until the copy is done. It should be fine unless
> > the users inspect the page content without acquiring page lock.
>
> The current patch from David has replaced "insert hpage into holes" with
> "insert RETRY entries into holes", so IMHO the hpage is not visible at all
> when releasing page cache lock here.

IIRC his patch (just this patch, don't include patch #1) conceptually does:

	acquire xa lock
	fill the holes with retry entry
	if (nr_none == nr_none_check && uffd missing pass)
		/* no hole is filled since holding xa_lock and no uffd missing */
		install huge page in page cache <-- huge page is visible here
	else {
		set error code
		replace retry entry back to NULL
	}
	release xa_lock
	if (succeed) {
		copy content to huge page
		unlock huge page
	} else
		restore the small pages

Am I missing something?

> All the accessors (including RCU protected ones to access page cache;
> those may not need to take the page lock) should be spinning on the RETRY
> entry, which it seems fine to me. But my question was whether it's legal
> to keep them spinning even after releasing the page cache lock.

After releasing the page cache lock, they should see NULL entry or huge
page IIUC.

[snip]
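To make the two terminal states concrete, the publication step under the
xa_lock amounts to something like the following (a condensed rearrangement
of the logic in the patch above, not its literal control flow):

	xas_lock_irq(&xas);
	if (result == SCAN_SUCCEED) {
		/*
		 * Success: a single multi-index entry covers the range, so
		 * readers spinning on XA_RETRY_ENTRY now observe the hpage.
		 */
		xas_set_order(&xas, start, HPAGE_PMD_ORDER);
		xas_store(&xas, hpage);
	} else {
		/*
		 * Rollback: clear every retry entry so spinning readers see
		 * a plain hole (NULL) and take the missing-page path again.
		 */
		xas_set(&xas, start);
		for (index = start; index < end; index++) {
			if (xas_next(&xas) == XA_RETRY_ENTRY)
				xas_store(&xas, NULL);
		}
	}
	xas_unlock_irq(&xas);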
On Thu, Feb 16, 2023 at 11:41 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Feb 16, 2023 at 10:37:47AM +0900, David Stevens wrote:
> > On Thu, Feb 16, 2023 at 7:48 AM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Tue, Feb 14, 2023 at 04:57:10PM +0900, David Stevens wrote:
> > > > From: David Stevens <stevensd@chromium.org>
> > > >
> > > > Make sure that collapse_file respects any userfaultfds registered with
> > > > MODE_MISSING. If userspace has any such userfaultfds registered, then
> > > > for any page which it knows to be missing, it may expect a
> > > > UFFD_EVENT_PAGEFAULT. This means collapse_file needs to take care when
> > > > collapsing a shmem range would result in replacing an empty page with a
> > > > THP, so that it doesn't break userfaultfd.
> > > >
> > > > Synchronization when checking for userfaultfds in collapse_file is
> > > > tricky because the mmap locks can't be used to prevent races with the
> > > > registration of new userfaultfds. Instead, we provide synchronization by
> > > > ensuring that userspace cannot observe the fact that pages are missing
> > > > before we check for userfaultfds. Although this allows registration of a
> > > > userfaultfd to race with collapse_file, it ensures that userspace cannot
> > > > observe any pages transition from missing to present after such a race.
> > > > This makes such a race indistinguishable from the collapse occurring
> > > > immediately before the userfaultfd registration.
> > > >
> > > > The first step to provide this synchronization is to stop filling gaps
> > > > during the loop iterating over the target range, since the page cache
> > > > lock can be dropped during that loop. The second step is to fill the
> > > > gaps with XA_RETRY_ENTRY after the page cache lock is acquired the final
> > > > time, to avoid races with accesses to the page cache that only take the
> > > > RCU read lock.
> > > >
> > > > This fix is targeted at khugepaged, but the change also applies to
> > > > MADV_COLLAPSE. MADV_COLLAPSE on a range with a userfaultfd will now
> > > > return EBUSY if there are any missing pages (instead of succeeding on
> > > > shmem and returning EINVAL on anonymous memory). There is also now a
> > > > window during MADV_COLLAPSE where a fault on a missing page will cause
> > > > the syscall to fail with EAGAIN.
> > > >
> > > > The fact that intermediate page cache state can no longer be observed
> > > > before the rollback of a failed collapse is also technically a
> > > > userspace-visible change (via at least SEEK_DATA and SEEK_END), but it
> > > > is exceedingly unlikely that anything relies on being able to observe
> > > > that transient state.
> > > >
> > > > Signed-off-by: David Stevens <stevensd@chromium.org>
> > > > ---
> > > >  mm/khugepaged.c | 66 +++++++++++++++++++++++++++++++++++++++++++------
> > > >  1 file changed, 58 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > index b648f1053d95..8c2e2349e883 100644
> > > > --- a/mm/khugepaged.c
> > > > +++ b/mm/khugepaged.c
> > > > @@ -55,6 +55,7 @@ enum scan_result {
> > > >  	SCAN_CGROUP_CHARGE_FAIL,
> > > >  	SCAN_TRUNCATED,
> > > >  	SCAN_PAGE_HAS_PRIVATE,
> > > > +	SCAN_PAGE_FILLED,
> > >
> > > PS: You may want to also touch SCAN_STATUS in huge_memory.h next time.
> > >
> > > >  };
> > > >
> > > >  #define CREATE_TRACE_POINTS
> > > > @@ -1725,8 +1726,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> > > >   * - allocate and lock a new huge page;
> > > >   * - scan page cache replacing old pages with the new one
> > > >   *   + swap/gup in pages if necessary;
> > > > - *   + fill in gaps;
> > >
> > > IIUC it's not a complete removal, but just moved downwards:
> > >
> > > >   *   + keep old pages around in case rollback is required;
> > > > + * - finalize updates to the page cache;
> > >
> > >   + fill in gaps with RETRY entries
> > >   + detect race conditions with userfaultfds
> > >
> > > >   * - if replacing succeeds:
> > > >   *   + copy data over;
> > > >   *   + free old pages;
> > > > @@ -1805,13 +1806,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> > > >  			result = SCAN_TRUNCATED;
> > > >  			goto xa_locked;
> > > >  		}
> > > > -		xas_set(&xas, index);
> > > > +		xas_set(&xas, index + 1);
> > > >  	}
> > > >  	if (!shmem_charge(mapping->host, 1)) {
> > > >  		result = SCAN_FAIL;
> > > >  		goto xa_locked;
> > > >  	}
> > > > -	xas_store(&xas, hpage);
> > > >  	nr_none++;
> > > >  	continue;
> > > >  }
> > > > @@ -1970,6 +1970,56 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> > > >  		put_page(page);
> > > >  		goto xa_unlocked;
> > > >  	}
> > > > +
> > > > +	if (nr_none) {
> > > > +		struct vm_area_struct *vma;
> > > > +		int nr_none_check = 0;
> > > > +
> > > > +		xas_unlock_irq(&xas);
> > > > +		i_mmap_lock_read(mapping);
> > > > +		xas_lock_irq(&xas);
> > > > +
> > > > +		xas_set(&xas, start);
> > > > +		for (index = start; index < end; index++) {
> > > > +			if (!xas_next(&xas)) {
> > > > +				xas_store(&xas, XA_RETRY_ENTRY);
> > > > +				nr_none_check++;
> > > > +			}
> > > > +		}
> > > > +
> > > > +		if (nr_none != nr_none_check) {
> > > > +			result = SCAN_PAGE_FILLED;
> > > > +			goto immap_locked;
> > > > +		}
> > > > +
> > > > +		/*
> > > > +		 * If userspace observed a missing page in a VMA with an armed
> > > > +		 * userfaultfd, then it might expect a UFFD_EVENT_PAGEFAULT for
> > > > +		 * that page, so we need to roll back to avoid suppressing such
> > > > +		 * an event. Any userfaultfds armed after this point will not be
> > > > +		 * able to observe any missing pages due to the previously
> > > > +		 * inserted retry entries.
> > > > +		 */
> > > > +		vma_interval_tree_foreach(vma, &mapping->i_mmap, start, start) {
> > > > +			if (userfaultfd_missing(vma)) {
> > > > +				result = SCAN_EXCEED_NONE_PTE;
> > > > +				goto immap_locked;
> > > > +			}
> > > > +		}
> > > > +
> > > > +immap_locked:
> > > > +		i_mmap_unlock_read(mapping);
> > > > +		if (result != SCAN_SUCCEED) {
> > > > +			xas_set(&xas, start);
> > > > +			for (index = start; index < end; index++) {
> > > > +				if (xas_next(&xas) == XA_RETRY_ENTRY)
> > > > +					xas_store(&xas, NULL);
> > > > +			}
> > > > +
> > > > +			goto xa_locked;
> > > > +		}
> > > > +	}
> > > > +
> > >
> > > Until here, all look fine to me (ignoring patch 1 for now; assuming the
> > > hpage is always uptodate).
> > >
> > > My question is after here we'll release page cache lock again before
> > > try_to_unmap_flush(), but is it safe to keep RETRY entries after releasing
> > > page cache lock?  It means other threads can be spinning.  I assume page
> > > lock is always safe and sleepable, but not sure about the page cache lock
> > > here.
> >
> > We insert the multi-index entry for hpage before releasing the page
> > cache lock, which should replace all of the XA_RETRY_ENTRYs. So the
> > page cache will be fully up to date when we release the lock, at least
> > in terms of which pages it contains.
>
> IIUC we released it before copying the pages:
>
> xa_locked:
> 	xas_unlock_irq(&xas); <-------------------------------- here
> xa_unlocked:
>
> 	/*
> 	 * If collapse is successful, flush must be done now before copying.
> 	 * If collapse is unsuccessful, does flush actually need to be done?
> 	 * Do it anyway, to clear the state.
> 	 */
> 	try_to_unmap_flush();
>
> Before insertion of the multi-index:
>
> 	/* Join all the small entries into a single multi-index entry. */
> 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> 	xas_store(&xas, hpage);

Okay, I realize what's going on. There is a change in mm-everything
[1] that significantly rewrites collapse_file, and my patch is going
to conflict with that patch. I'll see if I can rework my patches on
top of that change.

[1] https://lore.kernel.org/all/20221205234059.42971-3-jiaqiyan@google.com/T/#u

-David
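For context on why the RETRY-entry trick provides the ordering the commit
message describes: lockless page cache readers already know how to handle
XA_RETRY_ENTRY, so while the gaps hold retry entries, an RCU-only lookup
loops rather than reporting the page as missing. A minimal sketch of that
reader pattern follows — it mirrors the shape of mapping_get_entry() in
mm/filemap.c, but lookup_rcu_only() itself is an illustrative name, not
code from this patch or from the kernel:

#include <linux/pagemap.h>
#include <linux/xarray.h>

/*
 * Sketch of an RCU-only page cache lookup. While collapse_file() has
 * XA_RETRY_ENTRY stored in the gaps, xas_retry() evaluates true and the
 * reader restarts the walk, so it can never observe the transient
 * "missing" state that the retry entries are masking.
 */
static struct page *lookup_rcu_only(struct address_space *mapping,
				    pgoff_t index)
{
	XA_STATE(xas, &mapping->i_pages, index);
	struct page *page;

	rcu_read_lock();
repeat:
	xas_reset(&xas);
	page = xas_load(&xas);
	if (xas_retry(&xas, page))
		goto repeat;	/* retry entry: collapse in progress */
	rcu_read_unlock();

	return page;
}

This is also exactly why the question above matters: readers like this one
spin (rather than sleep) on retry entries, so the entries must be resolved
before anyone can be left waiting on them for long.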
On Thu, Feb 16, 2023 at 6:00 PM David Stevens <stevensd@chromium.org> wrote:
>
> On Thu, Feb 16, 2023 at 11:41 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Thu, Feb 16, 2023 at 10:37:47AM +0900, David Stevens wrote:
> > > On Thu, Feb 16, 2023 at 7:48 AM Peter Xu <peterx@redhat.com> wrote:
> > > >
> > > > [...]
> > > >
> > > > My question is after here we'll release page cache lock again before
> > > > try_to_unmap_flush(), but is it safe to keep RETRY entries after releasing
> > > > page cache lock?  It means other threads can be spinning.  I assume page
> > > > lock is always safe and sleepable, but not sure about the page cache lock
> > > > here.
> > >
> > > We insert the multi-index entry for hpage before releasing the page
> > > cache lock, which should replace all of the XA_RETRY_ENTRYs. So the
> > > page cache will be fully up to date when we release the lock, at least
> > > in terms of which pages it contains.
> >
> > IIUC we released it before copying the pages:
> >
> > xa_locked:
> > 	xas_unlock_irq(&xas); <-------------------------------- here
> > xa_unlocked:
> >
> > 	/*
> > 	 * If collapse is successful, flush must be done now before copying.
> > 	 * If collapse is unsuccessful, does flush actually need to be done?
> > 	 * Do it anyway, to clear the state.
> > 	 */
> > 	try_to_unmap_flush();
> >
> > Before insertion of the multi-index:
> >
> > 	/* Join all the small entries into a single multi-index entry. */
> > 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> > 	xas_store(&xas, hpage);
>
> Okay, I realize what's going on. There is a change in mm-everything
> [1] that significantly rewrites collapse_file, and my patch is going
> to conflict with that patch. I'll see if I can rework my patches on
> top of that change.
>
> [1] https://lore.kernel.org/all/20221205234059.42971-3-jiaqiyan@google.com/T/#u

Aha, thanks for the heads up.  I knew this patch but I didn't notice it
had been landed in mm-unstable tree...

> -David
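For readers less familiar with the userspace side being protected here: the
whole check exists because a process can arm a userfaultfd in MISSING mode
over the range and, from then on, expects a UFFD_EVENT_PAGEFAULT message for
any hole it touches. A hedged sketch of that registration, using the standard
userfaultfd(2) ioctls — arm_uffd_missing(), addr, and len are illustrative
names, not code from this series:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

/*
 * Arm a userfaultfd in MISSING mode over [addr, addr + len). Once this
 * succeeds, a fault on any hole in the range must be reported via a
 * uffd_msg with event == UFFD_EVENT_PAGEFAULT instead of being silently
 * satisfied -- which is what collapse_file() would otherwise do when it
 * replaces holes with a THP.
 */
static int arm_uffd_missing(void *addr, unsigned long len)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	if (uffd < 0)
		return -1;
	if (ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		close(uffd);
		return -1;
	}
	return uffd;	/* read() struct uffd_msg events from this fd */
}

The patch's i_mmap walk with userfaultfd_missing() is what detects VMAs
armed this way before any hole is filled.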
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b648f1053d95..8c2e2349e883 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -55,6 +55,7 @@ enum scan_result {
 	SCAN_CGROUP_CHARGE_FAIL,
 	SCAN_TRUNCATED,
 	SCAN_PAGE_HAS_PRIVATE,
+	SCAN_PAGE_FILLED,
 };
 
 #define CREATE_TRACE_POINTS
@@ -1725,8 +1726,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
  * - allocate and lock a new huge page;
  * - scan page cache replacing old pages with the new one
  *   + swap/gup in pages if necessary;
- *   + fill in gaps;
  *   + keep old pages around in case rollback is required;
+ * - finalize updates to the page cache;
  * - if replacing succeeds:
  *   + copy data over;
  *   + free old pages;
@@ -1805,13 +1806,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				result = SCAN_TRUNCATED;
 				goto xa_locked;
 			}
-			xas_set(&xas, index);
+			xas_set(&xas, index + 1);
 		}
 		if (!shmem_charge(mapping->host, 1)) {
 			result = SCAN_FAIL;
 			goto xa_locked;
 		}
-		xas_store(&xas, hpage);
 		nr_none++;
 		continue;
 	}
@@ -1970,6 +1970,56 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		put_page(page);
 		goto xa_unlocked;
 	}
+
+	if (nr_none) {
+		struct vm_area_struct *vma;
+		int nr_none_check = 0;
+
+		xas_unlock_irq(&xas);
+		i_mmap_lock_read(mapping);
+		xas_lock_irq(&xas);
+
+		xas_set(&xas, start);
+		for (index = start; index < end; index++) {
+			if (!xas_next(&xas)) {
+				xas_store(&xas, XA_RETRY_ENTRY);
+				nr_none_check++;
+			}
+		}
+
+		if (nr_none != nr_none_check) {
+			result = SCAN_PAGE_FILLED;
+			goto immap_locked;
+		}
+
+		/*
+		 * If userspace observed a missing page in a VMA with an armed
+		 * userfaultfd, then it might expect a UFFD_EVENT_PAGEFAULT for
+		 * that page, so we need to roll back to avoid suppressing such
+		 * an event. Any userfaultfds armed after this point will not be
+		 * able to observe any missing pages due to the previously
+		 * inserted retry entries.
+		 */
+		vma_interval_tree_foreach(vma, &mapping->i_mmap, start, start) {
+			if (userfaultfd_missing(vma)) {
+				result = SCAN_EXCEED_NONE_PTE;
+				goto immap_locked;
+			}
+		}
+
+immap_locked:
+		i_mmap_unlock_read(mapping);
+		if (result != SCAN_SUCCEED) {
+			xas_set(&xas, start);
+			for (index = start; index < end; index++) {
+				if (xas_next(&xas) == XA_RETRY_ENTRY)
+					xas_store(&xas, NULL);
+			}
+
+			goto xa_locked;
+		}
+	}
+
 	nr = thp_nr_pages(hpage);
 
 	if (is_shmem)
@@ -2068,15 +2118,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	xas_set(&xas, start);
-	xas_for_each(&xas, page, end - 1) {
+	end = index;
+	for (index = start; index < end; index++) {
+		xas_next(&xas);
 		page = list_first_entry_or_null(&pagelist,
 				struct page, lru);
 		if (!page || xas.xa_index < page->index) {
-			if (!nr_none)
-				break;
 			nr_none--;
-			/* Put holes back where they were */
-			xas_store(&xas, NULL);
 			continue;
 		}
@@ -2592,11 +2640,13 @@ static int madvise_collapse_errno(enum scan_result r)
 	case SCAN_ALLOC_HUGE_PAGE_FAIL:
 		return -ENOMEM;
 	case SCAN_CGROUP_CHARGE_FAIL:
+	case SCAN_EXCEED_NONE_PTE:
 		return -EBUSY;
 	/* Resource temporary unavailable - trying again might succeed */
 	case SCAN_PAGE_LOCK:
 	case SCAN_PAGE_LRU:
 	case SCAN_DEL_PAGE_LRU:
+	case SCAN_PAGE_FILLED:
 		return -EAGAIN;
 	/*
 	 * Other: Trying again likely not to succeed / error intrinsic to
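Given the madvise_collapse_errno() changes above, a userspace caller of
MADV_COLLAPSE would observe the two new failure modes roughly as follows.
This is a hedged sketch, not code from the series: try_collapse() is a
hypothetical helper, MADV_COLLAPSE requires a v6.1+ kernel, and addr/len
are assumed to describe a THP-aligned shmem mapping:

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* <asm-generic/mman-common.h>, v6.1+ */
#endif

/*
 * Returns 0 on success, 1 if retrying may succeed, -1 otherwise.
 * With this patch applied:
 *   EAGAIN covers SCAN_PAGE_FILLED -- a racing fault filled a hole
 *     mid-collapse, so a retry may succeed;
 *   EBUSY covers SCAN_EXCEED_NONE_PTE -- missing pages under an armed
 *     MODE_MISSING userfaultfd, which won't resolve until the pages
 *     are filled.
 */
static int try_collapse(void *addr, size_t len)
{
	if (!madvise(addr, len, MADV_COLLAPSE))
		return 0;

	switch (errno) {
	case EAGAIN:
		return 1;
	case EBUSY:
	default:
		perror("madvise(MADV_COLLAPSE)");
		return -1;
	}
}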