Message ID | 923480d5-35ab-7cac-79d0-343d16e29318@google.com |
---|---|
State | New |
Headers |
Date: Sun, 28 May 2023 23:16:16 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>, Mike Rapoport <rppt@kernel.org>, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>, Matthew Wilcox <willy@infradead.org>, David Hildenbrand <david@redhat.com>, Suren Baghdasaryan <surenb@google.com>, Qi Zheng <zhengqi.arch@bytedance.com>, Yang Shi <shy828301@gmail.com>, Mel Gorman <mgorman@techsingularity.net>, Peter Xu <peterx@redhat.com>, Peter Zijlstra <peterz@infradead.org>, Will Deacon <will@kernel.org>, Yu Zhao <yuzhao@google.com>, Alistair Popple <apopple@nvidia.com>, Ralph Campbell <rcampbell@nvidia.com>, Ira Weiny <ira.weiny@intel.com>, Steven Price <steven.price@arm.com>, SeongJae Park <sj@kernel.org>, Naoya Horiguchi <naoya.horiguchi@nec.com>, Christophe Leroy <christophe.leroy@csgroup.eu>, Zack Rusin <zackr@vmware.com>, Jason Gunthorpe <jgg@ziepe.ca>, Axel Rasmussen <axelrasmussen@google.com>, Anshuman Khandual <anshuman.khandual@arm.com>, Pasha Tatashin <pasha.tatashin@soleen.com>, Miaohe Lin <linmiaohe@huawei.com>, Minchan Kim <minchan@kernel.org>, Christoph Hellwig <hch@infradead.org>, Song Liu <song@kernel.org>, Thomas Hellstrom <thomas.hellstrom@linux.intel.com>, Russell King <linux@armlinux.org.uk>, "David S. Miller" <davem@davemloft.net>, Michael Ellerman <mpe@ellerman.id.au>, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, Claudio Imbrenda <imbrenda@linux.ibm.com>, Alexander Gordeev <agordeev@linux.ibm.com>, Jann Horn <jannh@google.com>, linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map()
In-Reply-To: <35e983f5-7ed3-b310-d949-9ae8b130cdab@google.com>
Message-ID: <923480d5-35ab-7cac-79d0-343d16e29318@google.com>
References: <35e983f5-7ed3-b310-d949-9ae8b130cdab@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
List-ID: <linux-kernel.vger.kernel.org> |
Series | mm: free retracted page table by RCU |
Commit Message
Hugh Dickins
May 29, 2023, 6:16 a.m. UTC
There is a faint risk that __pte_offset_map(), on a 32-bit architecture
with a 64-bit pmd_t (e.g. x86-32 with CONFIG_X86_PAE=y), would succeed on
a pmdval assembled from a pmd_low and a pmd_high which never belonged
together: their combination not pointing to a page table at all, perhaps
not even to a valid pfn. pmdp_get_lockless() is not enough to prevent that.
Guard against that (on such configs) by using local_irq_save() to block
the TLB flush between present updates, as linux/pgtable.h suggests. It is
only needed around the pmdp_get_lockless() in __pte_offset_map(): a race
when __pte_offset_map_lock() repeats the pmdp_get_lockless() after getting
the lock would just send it back to __pte_offset_map() again.
CONFIG_GUP_GET_PXX_LOW_HIGH is enabled when required by mips, sh and x86.
It is not enabled by arm-32 CONFIG_ARM_LPAE: my understanding is that
Will Deacon's 2020 enhancements to READ_ONCE() are sufficient for arm.
It is not enabled by arc, but its pmd_t is 32-bit even when pte_t is 64-bit.
Limit the IRQ disablement to CONFIG_HIGHPTE? Perhaps, but that would need
a little more work: retry if pmd_low is good for a page table but pmd_high
is non-zero from THP (and that might be making x86-specific assumptions).
Signed-off-by: Hugh Dickins <hughd@google.com>
---
mm/pgtable-generic.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
Comments
On Sun, May 28, 2023 at 11:16:16PM -0700, Hugh Dickins wrote:
> +#if defined(CONFIG_GUP_GET_PXX_LOW_HIGH) && \
> +	(defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RCU))
> +/*
> + * See the comment above ptep_get_lockless() in include/linux/pgtable.h:
> + * the barriers in pmdp_get_lockless() cannot guarantee that the value in
> + * pmd_high actually belongs with the value in pmd_low; but holding interrupts
> + * off blocks the TLB flush between present updates, which guarantees that a
> + * successful __pte_offset_map() points to a page from matched halves.
> + */
> +#define config_might_irq_save(flags)	local_irq_save(flags)
> +#define config_might_irq_restore(flags)	local_irq_restore(flags)
> +#else
> +#define config_might_irq_save(flags)
> +#define config_might_irq_restore(flags)

I don't like the name.  It should indicate that it's PMD-related, so
pmd_read_start(flags) / pmd_read_end(flags)?
On Sun, May 28, 2023 at 11:16:16PM -0700, Hugh Dickins wrote:
> There is a faint risk that __pte_offset_map(), on a 32-bit architecture
> with a 64-bit pmd_t e.g. x86-32 with CONFIG_X86_PAE=y, would succeed on
> a pmdval assembled from a pmd_low and a pmd_high which never belonged
> together: their combination not pointing to a page table at all, perhaps
> not even a valid pfn.  pmdp_get_lockless() is not enough to prevent that.
>
> Guard against that (on such configs) by local_irq_save() blocking TLB
> flush between present updates, as linux/pgtable.h suggests.  It's only
> needed around the pmdp_get_lockless() in __pte_offset_map(): a race when
> __pte_offset_map_lock() repeats the pmdp_get_lockless() after getting the
> lock, would just send it back to __pte_offset_map() again.

What about the other places calling pmdp_get_lockless? It seems like
this is quietly making it part of the API that the caller must hold
the IPIs off.

And Jann had a note that this approach used by the lockless functions
doesn't work anyhow:

https://lore.kernel.org/linux-mm/CAG48ez3h-mnp9ZFC10v+-BW_8NQvxbwBsMYJFP8JX31o0B17Pg@mail.gmail.com/

Though we never fixed it, AFAIK..

Jason
On Wed, 31 May 2023, Jason Gunthorpe wrote:
> On Sun, May 28, 2023 at 11:16:16PM -0700, Hugh Dickins wrote:
> > There is a faint risk that __pte_offset_map(), on a 32-bit architecture
> > with a 64-bit pmd_t e.g. x86-32 with CONFIG_X86_PAE=y, would succeed on
> > a pmdval assembled from a pmd_low and a pmd_high which never belonged
> > together: their combination not pointing to a page table at all, perhaps
> > not even a valid pfn.  pmdp_get_lockless() is not enough to prevent that.
> >
> > Guard against that (on such configs) by local_irq_save() blocking TLB
> > flush between present updates, as linux/pgtable.h suggests.  It's only
> > needed around the pmdp_get_lockless() in __pte_offset_map(): a race when
> > __pte_offset_map_lock() repeats the pmdp_get_lockless() after getting the
> > lock, would just send it back to __pte_offset_map() again.
>
> What about the other places calling pmdp_get_lockless ? It seems like
> this is quietly making it part of the API that the caller must hold
> the IPIs off.

No, I'm making no judgment of other places where pmdp_get_lockless() is
used: examination might show that some need more care, but I'll just
assume that each is taking as much care as it needs.  But here where I'm
making changes, I do see that we need this extra care.

> And Jann had a note that this approach used by the lockless functions
> doesn't work anyhow:
>
> https://lore.kernel.org/linux-mm/CAG48ez3h-mnp9ZFC10v+-BW_8NQvxbwBsMYJFP8JX31o0B17Pg@mail.gmail.com/

Thanks a lot for the link: I don't know why, but I never saw that mail
thread at all before.  I have not fully digested it yet, to be honest:
MADV_DONTNEED, doesn't flush TLB yet, etc - I'll have to get into the
right frame of mind for that.

> Though we never fixed it, AFAIK..

I'm certainly depending very much on pmdp_get_lockless(): and hoping to
find its case is easier to defend than at the ptep_get_lockless() level.

Thanks,
Hugh
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 674671835631..d28b63386cef 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -232,12 +232,32 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 #endif
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#if defined(CONFIG_GUP_GET_PXX_LOW_HIGH) && \
+	(defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RCU))
+/*
+ * See the comment above ptep_get_lockless() in include/linux/pgtable.h:
+ * the barriers in pmdp_get_lockless() cannot guarantee that the value in
+ * pmd_high actually belongs with the value in pmd_low; but holding interrupts
+ * off blocks the TLB flush between present updates, which guarantees that a
+ * successful __pte_offset_map() points to a page from matched halves.
+ */
+#define config_might_irq_save(flags)	local_irq_save(flags)
+#define config_might_irq_restore(flags)	local_irq_restore(flags)
+#else
+#define config_might_irq_save(flags)
+#define config_might_irq_restore(flags)
+#endif
+
 pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
+	unsigned long __maybe_unused flags;
 	pmd_t pmdval;
 
 	rcu_read_lock();
+	config_might_irq_save(flags);
 	pmdval = pmdp_get_lockless(pmd);
+	config_might_irq_restore(flags);
+
 	if (pmdvalp)
 		*pmdvalp = pmdval;
 	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))