Message ID | 94aec8fe-383f-892-dcbf-d4c14e460a7@google.com |
---|---|
State | New |
Headers | From: Hugh Dickins <hughd@google.com>; Date: Tue, 9 May 2023 22:01:16 -0700 (PDT); Subject: [PATCH 15/23] s390: allow pte_offset_map_lock() to fail; To: Andrew Morton <akpm@linux-foundation.org>; In-Reply-To: <77a5d8c-406b-7068-4f17-23b7ac53bc83@google.com>; Message-ID: <94aec8fe-383f-892-dcbf-d4c14e460a7@google.com> |
Series | arch: allow pte_offset_map[_lock]() to fail |
Commit Message
Hugh Dickins
May 10, 2023, 5:01 a.m. UTC
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
arch/s390/kernel/uv.c | 2 ++
arch/s390/mm/gmap.c | 2 ++
arch/s390/mm/pgtable.c | 12 +++++++++---
3 files changed, 13 insertions(+), 3 deletions(-)
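
All three files get the same treatment: a caller that used to assume pte_offset_map_lock() (or get_locked_pte(), which wraps it) always returns a pte pointer must now handle a NULL return, meaning no page table was found at that address. Below is a minimal sketch of that caller-side shape — illustrative only, with a hypothetical function name, and assuming kernel context rather than code taken from the patch:

```c
/*
 * Illustrative sketch only: the shape a call site takes once
 * pte_offset_map_lock() is allowed to fail.  Hypothetical helper name;
 * kernel context assumed.
 */
static int example_touch_one_pte(struct mm_struct *mm, pmd_t *pmdp,
				 unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *ptep;

	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
	if (!ptep)
		return -EAGAIN;	/* no page table here: bail out or retry */

	if (pte_present(*ptep)) {
		/* ... inspect or modify the pte while holding ptl ... */
	}
	pte_unmap_unlock(ptep, ptl);
	return 0;
}
```

Each call site in the patch picks the recovery that fits its context: `goto out` in uv.c, `break` in gmap.c, and `goto again` in pgtable.c.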
Comments
On Tue, 9 May 2023 22:01:16 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:

> In rare transient cases, not yet made possible, pte_offset_map() and pte_offset_map_lock() may not find a page table: handle appropriately.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  arch/s390/kernel/uv.c  |  2 ++
>  arch/s390/mm/gmap.c    |  2 ++
>  arch/s390/mm/pgtable.c | 12 +++++++++---
>  3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index cb2ee06df286..3c62d1b218b1 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
>
>  	rc = -ENXIO;
>  	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> +	if (!ptep)
> +		goto out;
>  	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
>  		page = pte_page(*ptep);
>  		rc = -EAGAIN;
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index dc90d1eb0d55..d198fc9475a2 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
>  	spinlock_t *ptl;
>
>  	ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +	if (!ptep)
> +		break;

so if pte_offset_map_lock fails, we abort and skip both the failed entry and the rest of the entries?

can pte_offset_map_lock be retried immediately if it fails? (consider that we currently don't allow THP with KVM guests)

Would something like this:

do {
	ptep = pte_offset_map_lock(...);
	mb(); /* maybe? */
} while (!ptep);

make sense?

otherwise maybe it's better to return an error and retry the whole walk_page_range() in s390_enable_sie() ? it's a slow path anyway.

>  	if (is_zero_pfn(pte_pfn(*ptep)))
>  		ptep_xchg_direct(walk->mm, addr, ptep, __pte(_PAGE_INVALID));
>  	pte_unmap_unlock(ptep, ptl);

[...]
On Wed, 17 May 2023, Claudio Imbrenda wrote:
> On Tue, 9 May 2023 22:01:16 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:
>
> > In rare transient cases, not yet made possible, pte_offset_map() and pte_offset_map_lock() may not find a page table: handle appropriately.
> >
> > Signed-off-by: Hugh Dickins <hughd@google.com>
> > ---
> >  arch/s390/kernel/uv.c  |  2 ++
> >  arch/s390/mm/gmap.c    |  2 ++
> >  arch/s390/mm/pgtable.c | 12 +++++++++---
> >  3 files changed, 13 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> > index cb2ee06df286..3c62d1b218b1 100644
> > --- a/arch/s390/kernel/uv.c
> > +++ b/arch/s390/kernel/uv.c
> > @@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
> >
> >  	rc = -ENXIO;
> >  	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> > +	if (!ptep)
> > +		goto out;

You may or may not be asking about this instance too.  When I looked at how the code lower down handles -ENXIO (promoting it to -EFAULT if an access fails, or to -EAGAIN to ask for a retry), this looked just right (whereas using -EAGAIN here would be wrong: that expects a "page" which has not been initialized at this point).

> >  	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
> >  		page = pte_page(*ptep);
> >  		rc = -EAGAIN;
> > diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> > index dc90d1eb0d55..d198fc9475a2 100644
> > --- a/arch/s390/mm/gmap.c
> > +++ b/arch/s390/mm/gmap.c
> > @@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
> >  	spinlock_t *ptl;
> >
> >  	ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> > +	if (!ptep)
> > +		break;
>
> so if pte_offset_map_lock fails, we abort and skip both the failed entry and the rest of the entries?

Yes.

> can pte_offset_map_lock be retried immediately if it fails? (consider that we currently don't allow THP with KVM guests)
>
> Would something like this:
>
> do {
> 	ptep = pte_offset_map_lock(...);
> 	mb(); /* maybe? */
> } while (!ptep);
>
> make sense?

No.  But you're absolutely right to be asking: thank you for looking into it so carefully - and I realize that it's hard at this stage to judge what's appropriate, when I've not yet even posted the endpoint of these changes, the patches which make it possible not to find a page table here.  And I'm intentionally keeping that vague, because although I shall only introduce a THP case, I do expect it to be built upon later in reclaiming empty page tables: it would be nice not to have to change the arch code again when extending further.

My "rare transient cases" phrase may be somewhat misleading: one thing that's wrong with your tight pte_offset_map_lock() loop above is that the pmd entry pointing to page table may have been suddenly replaced by a pmd_none() entry; and there's nothing in your loop above to break out if that is so.

But if a page table is suddenly removed, that would be because it was either empty, or replaced by a THP entry, or easily reconstructable on demand (by that, I probably mean it was only mapping shared file pages, which can just be refaulted if needed again).

The case you're wary of, is if the page table were removed briefly, then put back shortly after: and still contains zero pages further down.  That's not something mm does now, nor at the end of my several series, nor that I imagine us wanting to do in future: but I am struggling to find a killer argument to persuade you that it could never be done - most pages in a page table do need rmap tracking, which will BUG if it's broken, but that argument happens not to apply to the zero page.

(Hmm, there could be somewhere, where we would find it convenient to remove a page table with intent to do ...something, then validation of that isolated page table fails, so we just put it back again.)

Is it good enough for me to promise you that we won't do that?

There are several ways in which we could change __zap_zero_pages(), but I don't see them as actually dealing with the concern at hand.

One change, I've tended to make at the mm end but did not dare to interfere here: it would seem more sensible to do a single pte_offset_map_lock() outside the loop, return if that fails, increment ptep inside the loop, pte_unmap_unlock() after the loop.

But perhaps you have preemption reasons for not wanting that; and although it would eliminate the oddity of half-processing a page table, it would not really resolve the problem at hand: because, what if this page table got removed just before __zap_zero_pages() tries to take the lock, then got put back just after?

Another change: I see __zap_zero_pages() is driven by walk_page_range(), and over at the mm end I'm usually setting walk->action to ACTION_AGAIN in these failure cases; but thought that an unnecessary piece of magic here, and cannot see how it could actually help.  Your "retry the whole walk_page_range()" suggestion below would be a heavier equivalent of that: but neither way gives confidence, if a page table could actually be removed then reinserted without mmap_write_lock().

I think I want to keep this s390 __zap_zero_pages() issue in mind, it is important and thank you for raising it; but don't see any change to the patch as actually needed.

Hugh

> otherwise maybe it's better to return an error and retry the whole walk_page_range() in s390_enable_sie() ? it's a slow path anyway.
>
> >  	if (is_zero_pfn(pte_pfn(*ptep)))
> >  		ptep_xchg_direct(walk->mm, addr, ptep, __pte(_PAGE_INVALID));
> >  	pte_unmap_unlock(ptep, ptl);
>
> [...]
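
For reference, the alternative structure Hugh describes above but deliberately leaves alone — take the map/lock once per page table rather than once per address — would look roughly like the sketch below. This is a hypothetical illustration of that idea (including the made-up name `__zap_zero_pages_alt`), not code from the patch or from the kernel tree:

```c
/*
 * Hypothetical sketch of the restructuring mentioned above, NOT what the
 * patch does: map and lock the page table once, return if that fails,
 * step the pte pointer inside the loop, and unlock after the loop.
 */
static int __zap_zero_pages_alt(pmd_t *pmd, unsigned long start,
				unsigned long end, struct mm_walk *walk)
{
	spinlock_t *ptl;
	pte_t *start_ptep, *ptep;
	unsigned long addr;

	start_ptep = pte_offset_map_lock(walk->mm, pmd, start, &ptl);
	if (!start_ptep)
		return 0;	/* whole page table already gone: nothing to zap */

	for (addr = start, ptep = start_ptep; addr != end;
	     addr += PAGE_SIZE, ptep++) {
		if (is_zero_pfn(pte_pfn(*ptep)))
			ptep_xchg_direct(walk->mm, addr, ptep,
					 __pte(_PAGE_INVALID));
	}
	pte_unmap_unlock(start_ptep, ptl);
	return 0;
}
```

As Hugh notes, this would remove the oddity of half-processing a page table, but it still does nothing about a table being removed and reinserted around the lock acquisition, which is why the patch keeps the original per-address structure.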
On Wed, 17 May 2023 14:50:28 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:

> On Wed, 17 May 2023, Claudio Imbrenda wrote:
> > On Tue, 9 May 2023 22:01:16 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:
> >
> > > In rare transient cases, not yet made possible, pte_offset_map() and pte_offset_map_lock() may not find a page table: handle appropriately.
> > >
> > > Signed-off-by: Hugh Dickins <hughd@google.com>
> > > ---
> > >  arch/s390/kernel/uv.c  |  2 ++
> > >  arch/s390/mm/gmap.c    |  2 ++
> > >  arch/s390/mm/pgtable.c | 12 +++++++++---
> > >  3 files changed, 13 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> > > index cb2ee06df286..3c62d1b218b1 100644
> > > --- a/arch/s390/kernel/uv.c
> > > +++ b/arch/s390/kernel/uv.c
> > > @@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
> > >
> > >  	rc = -ENXIO;
> > >  	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> > > +	if (!ptep)
> > > +		goto out;
>
> You may or may not be asking about this instance too.  When I looked at

actually no, because of the reasons you give here :)

> how the code lower down handles -ENXIO (promoting it to -EFAULT if an access fails, or to -EAGAIN to ask for a retry), this looked just right (whereas using -EAGAIN here would be wrong: that expects a "page" which has not been initialized at this point).
>
> > >  	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
> > >  		page = pte_page(*ptep);
> > >  		rc = -EAGAIN;
> > > diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> > > index dc90d1eb0d55..d198fc9475a2 100644
> > > --- a/arch/s390/mm/gmap.c
> > > +++ b/arch/s390/mm/gmap.c
> > > @@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
> > >  	spinlock_t *ptl;
> > >
> > >  	ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> > > +	if (!ptep)
> > > +		break;
> >
> > so if pte_offset_map_lock fails, we abort and skip both the failed entry and the rest of the entries?
>
> Yes.
>
> > can pte_offset_map_lock be retried immediately if it fails? (consider that we currently don't allow THP with KVM guests)
> >
> > Would something like this:
> >
> > do {
> > 	ptep = pte_offset_map_lock(...);
> > 	mb(); /* maybe? */
> > } while (!ptep);
> >
> > make sense?
>
> No.  But you're absolutely right to be asking: thank you for looking into it so carefully - and I realize that it's hard at this stage to judge what's appropriate, when I've not yet even posted the endpoint of these changes, the patches which make it possible not to find a page table here.  And I'm intentionally keeping that vague, because although I shall only introduce a THP case, I do expect it to be built upon later in reclaiming empty page tables: it would be nice not to have to change the arch code again when extending further.
>
> My "rare transient cases" phrase may be somewhat misleading: one thing that's wrong with your tight pte_offset_map_lock() loop above is that the pmd entry pointing to page table may have been suddenly replaced by a pmd_none() entry; and there's nothing in your loop above to break out if that is so.
>
> But if a page table is suddenly removed, that would be because it was either empty, or replaced by a THP entry, or easily reconstructable on demand (by that, I probably mean it was only mapping shared file pages, which can just be refaulted if needed again).
>
> The case you're wary of, is if the page table were removed briefly, then put back shortly after: and still contains zero pages further down.  That's not something mm does now, nor at the end of my several series, nor that I imagine us wanting to do in future: but I am struggling to find a killer argument to persuade you that it could never be done - most pages in a page table do need rmap tracking, which will BUG if it's broken, but that argument happens not to apply to the zero page.
>
> (Hmm, there could be somewhere, where we would find it convenient to remove a page table with intent to do ...something, then validation of that isolated page table fails, so we just put it back again.)
>
> Is it good enough for me to promise you that we won't do that?
>
> There are several ways in which we could change __zap_zero_pages(), but I don't see them as actually dealing with the concern at hand.
>
> One change, I've tended to make at the mm end but did not dare to interfere here: it would seem more sensible to do a single pte_offset_map_lock() outside the loop, return if that fails, increment ptep inside the loop, pte_unmap_unlock() after the loop.
>
> But perhaps you have preemption reasons for not wanting that; and although it would eliminate the oddity of half-processing a page table, it would not really resolve the problem at hand: because, what if this page table got removed just before __zap_zero_pages() tries to take the lock, then got put back just after?
>
> Another change: I see __zap_zero_pages() is driven by walk_page_range(), and over at the mm end I'm usually setting walk->action to ACTION_AGAIN in these failure cases; but thought that an unnecessary piece of magic here, and cannot see how it could actually help.  Your "retry the whole walk_page_range()" suggestion below would be a heavier equivalent of that: but neither way gives confidence, if a page table could actually be removed then reinserted without mmap_write_lock().
>
> I think I want to keep this s390 __zap_zero_pages() issue in mind, it is important and thank you for raising it; but don't see any change to the patch as actually needed.
>
> Hugh

so if I understand the above correctly, pte_offset_map_lock will only fail if the whole page table has disappeared, and in that case, it will never reappear with zero pages, therefore we can safely skip (in that case just break). if we were to do a continue instead of a break, we would most likely fail again anyway.

in that case I would still like a small change in your patch: please write a short (2~3 lines max) comment about why it's ok to do things that way
On Tue, 23 May 2023, Claudio Imbrenda wrote:
>
> so if I understand the above correctly, pte_offset_map_lock will only fail if the whole page table has disappeared, and in that case, it will never reappear with zero pages, therefore we can safely skip (in that case just break). if we were to do a continue instead of a break, we would most likely fail again anyway.

Yes, that's the most likely; and you hold mmap_write_lock() there, and VM_NOHUGEPAGE on all vmas, so I think it's the only foreseeable possibility.

>
> in that case I would still like a small change in your patch: please write a short (2~3 lines max) comment about why it's ok to do things that way

Sure.

But I now see that I've disobeyed you, and gone to 4 lines (but in the comment above the function, so as not to distract from the code itself): is this good wording to you?  I needed to research how they were stopped from coming in afterwards, so wanted to put something greppable in there.

And, unless I'm misunderstanding, that "after THP was enabled" was always supposed to say "after THP was disabled" (because splitting a huge zero page pmd inserts a page table full of little zero ptes).

Or would you prefer the comment in the commit message instead, or down just above the pte_offset_map_lock() line?

It would be much better if I could find one place at the mm end, to enforce its end of the contract; but cannot think how to do that.

Hugh

--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2537,7 +2537,12 @@ static inline void thp_split_mm(struct mm_struct *mm)
  * Remove all empty zero pages from the mapping for lazy refaulting
  * - This must be called after mm->context.has_pgste is set, to avoid
  *   future creation of zero pages
- * - This must be called after THP was enabled
+ * - This must be called after THP was disabled.
+ *
+ * mm contracts with s390, that even if mm were to remove a page table,
+ * racing with the loop below and so causing pte_offset_map_lock() to fail,
+ * it will never insert a page table containing empty zero pages once
+ * mm_forbids_zeropage(mm) i.e. mm->context.has_pgste is set.
  */
 static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
 			  unsigned long end, struct mm_walk *walk)
On Tue, 23 May 2023 18:49:14 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:

> On Tue, 23 May 2023, Claudio Imbrenda wrote:
> >
> > so if I understand the above correctly, pte_offset_map_lock will only fail if the whole page table has disappeared, and in that case, it will never reappear with zero pages, therefore we can safely skip (in that case just break). if we were to do a continue instead of a break, we would most likely fail again anyway.
>
> Yes, that's the most likely; and you hold mmap_write_lock() there, and VM_NOHUGEPAGE on all vmas, so I think it's the only foreseeable possibility.
>
> >
> > in that case I would still like a small change in your patch: please write a short (2~3 lines max) comment about why it's ok to do things that way
>
> Sure.
>
> But I now see that I've disobeyed you, and gone to 4 lines (but in the comment above the function, so as not to distract from the code itself): is this good wording to you?  I needed to research how they were stopped from coming in afterwards, so wanted to put something greppable in there.
>
> And, unless I'm misunderstanding, that "after THP was enabled" was always supposed to say "after THP was disabled" (because splitting a huge zero page pmd inserts a page table full of little zero ptes).

indeed, thanks for noticing and fixing it

>
> Or would you prefer the comment in the commit message instead, or down just above the pte_offset_map_lock() line?
>
> It would be much better if I could find one place at the mm end, to enforce its end of the contract; but cannot think how to do that.
>
> Hugh
>
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2537,7 +2537,12 @@ static inline void thp_split_mm(struct mm_struct *mm)
>   * Remove all empty zero pages from the mapping for lazy refaulting
>   * - This must be called after mm->context.has_pgste is set, to avoid
>   *   future creation of zero pages
> - * - This must be called after THP was enabled
> + * - This must be called after THP was disabled.
> + *
> + * mm contracts with s390, that even if mm were to remove a page table,
> + * racing with the loop below and so causing pte_offset_map_lock() to fail,
> + * it will never insert a page table containing empty zero pages once
> + * mm_forbids_zeropage(mm) i.e. mm->context.has_pgste is set.
>   */
>  static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
>  			  unsigned long end, struct mm_walk *walk)

looks good, thanks
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index cb2ee06df286..3c62d1b218b1 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
 
 	rc = -ENXIO;
 	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
+	if (!ptep)
+		goto out;
 	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
 		page = pte_page(*ptep);
 		rc = -EAGAIN;
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index dc90d1eb0d55..d198fc9475a2 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
 	spinlock_t *ptl;
 
 	ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	if (!ptep)
+		break;
 	if (is_zero_pfn(pte_pfn(*ptep)))
 		ptep_xchg_direct(walk->mm, addr, ptep, __pte(_PAGE_INVALID));
 	pte_unmap_unlock(ptep, ptl);
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 6effb24de6d9..3bd2ab2a9a34 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -829,7 +829,7 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	default:
 		return -EFAULT;
 	}
-
+again:
 	ptl = pmd_lock(mm, pmdp);
 	if (!pmd_present(*pmdp)) {
 		spin_unlock(ptl);
@@ -850,6 +850,8 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	spin_unlock(ptl);
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	if (!ptep)
+		goto again;
 	new = old = pgste_get_lock(ptep);
 	pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
 			    PGSTE_ACC_BITS | PGSTE_FP_BIT);
@@ -938,7 +940,7 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
 	default:
 		return -EFAULT;
 	}
-
+again:
 	ptl = pmd_lock(mm, pmdp);
 	if (!pmd_present(*pmdp)) {
 		spin_unlock(ptl);
@@ -955,6 +957,8 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
 	spin_unlock(ptl);
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	if (!ptep)
+		goto again;
 	new = old = pgste_get_lock(ptep);
 	/* Reset guest reference bit only */
 	pgste_val(new) &= ~PGSTE_GR_BIT;
@@ -1000,7 +1004,7 @@ int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	default:
 		return -EFAULT;
 	}
-
+again:
 	ptl = pmd_lock(mm, pmdp);
 	if (!pmd_present(*pmdp)) {
 		spin_unlock(ptl);
@@ -1017,6 +1021,8 @@ int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
 	spin_unlock(ptl);
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	if (!ptep)
+		goto again;
 	pgste = pgste_get_lock(ptep);
 	*key = (pgste_val(pgste) & (PGSTE_ACC_BITS | PGSTE_FP_BIT)) >> 56;
 	paddr = pte_val(*ptep) & PAGE_MASK;
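
In the pgtable.c hunks the recovery is a retry rather than a bail-out: if the page table vanished between dropping the pmd lock and mapping the pte, jumping back to the new again: label re-reads the pmd, so a cleared pmd or a concurrent switch to a segment-level mapping is handled by the existing pmd-level checks. A condensed sketch of that shape — hypothetical function name, and the pmd-level branch is an assumption about the surrounding code, not a copy of it:

```c
/*
 * Condensed, hypothetical sketch of the retry pattern the pgtable.c hunks
 * introduce: whenever pte_offset_map_lock() fails, go back and re-evaluate
 * the pmd, since the page table may have been removed in the meantime.
 */
static int example_guest_key_op(struct mm_struct *mm, pmd_t *pmdp,
				unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *ptep;

again:
	ptl = pmd_lock(mm, pmdp);
	if (!pmd_present(*pmdp)) {
		spin_unlock(ptl);
		return -EFAULT;
	}
	if (pmd_large(*pmdp)) {
		/* ... key handled at segment (pmd) level ... */
		spin_unlock(ptl);
		return 0;
	}
	spin_unlock(ptl);

	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
	if (!ptep)
		goto again;	/* page table went away: re-read the pmd */
	/* ... operate on the pte and its pgste under ptl ... */
	pte_unmap_unlock(ptep, ptl);
	return 0;
}
```

Because set_guest_storage_key(), reset_guest_reference_bit() and get_guest_storage_key() already drop the pmd lock before mapping the pte, looping back to re-check the pmd is the natural way to absorb the new failure mode without changing their return semantics.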