Message ID | 6dd63b39-e71f-2e8b-7e0-83e02f3bcb39@google.com
---|---
State | New
Series | mm: free retracted page table by RCU
Commit Message
Hugh Dickins
May 29, 2023, 6:22 a.m. UTC
Add s390-specific pte_free_defer(), to call pte_free() via call_rcu().
pte_free_defer() will be called inside khugepaged's retract_page_tables()
loop, where allocating extra memory cannot be relied upon. This precedes
the generic version to avoid build breakage from incompatible pgtable_t.
This version is more complicated than others: because page_table_free()
needs to know which fragment is being freed, and which mm to link it to.
page_table_free()'s fragment handling is clever, but I could too easily
break it: what's done here in pte_free_defer() and pte_free_now() might
be better integrated with page_table_free()'s cleverness, but not by me!
By the time that page_table_free() gets called via RCU, it's conceivable
that mm would already have been freed: so mmgrab() in pte_free_defer()
and mmdrop() in pte_free_now(). No, that is not a good context to call
mmdrop() from, so make mmdrop_async() public and use that.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
arch/s390/include/asm/pgalloc.h | 4 ++++
arch/s390/mm/pgalloc.c | 34 +++++++++++++++++++++++++++++++++
include/linux/mm_types.h | 2 +-
include/linux/sched/mm.h | 1 +
kernel/fork.c | 2 +-
5 files changed, 41 insertions(+), 2 deletions(-)
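In outline, the deferral pattern is simple; what follows is a stripped-down sketch of it, not the patch itself: the real code additionally packs the 2K-fragment index into the low bits of page->pt_mm and is guarded by CONFIG_TRANSPARENT_HUGEPAGE (see the full diff at the end of this page).

/*
 * Stripped-down sketch only: the actual s390 patch also encodes which
 * 2K fragment of the 4K page is being freed, in the low bits of pt_mm.
 */
static void pte_free_now(struct rcu_head *head)
{
	struct page *page = container_of(head, struct page, rcu_head);
	struct mm_struct *mm = page->pt_mm;

	page_table_free(mm, (unsigned long *)page_to_virt(page));
	mmdrop_async(mm);	/* plain mmdrop() is not safe from RCU callback context */
}

void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
{
	struct page *page = virt_to_page(pgtable);

	mmgrab(mm);		/* keep mm alive until pte_free_now() has run */
	page->pt_mm = mm;
	call_rcu(&page->rcu_head, pte_free_now);
}

The include/linux/sched/mm.h and kernel/fork.c hunks exist only to make mmdrop_async() available outside kernel/fork.c for this use.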
Comments
On Sun, 28 May 2023, Hugh Dickins wrote:

> Add s390-specific pte_free_defer(), to call pte_free() via call_rcu().
> pte_free_defer() will be called inside khugepaged's retract_page_tables()
> loop, where allocating extra memory cannot be relied upon. This precedes
> the generic version to avoid build breakage from incompatible pgtable_t.
>
> This version is more complicated than others: because page_table_free()
> needs to know which fragment is being freed, and which mm to link it to.
>
> page_table_free()'s fragment handling is clever, but I could too easily
> break it: what's done here in pte_free_defer() and pte_free_now() might
> be better integrated with page_table_free()'s cleverness, but not by me!
>
> By the time that page_table_free() gets called via RCU, it's conceivable
> that mm would already have been freed: so mmgrab() in pte_free_defer()
> and mmdrop() in pte_free_now(). No, that is not a good context to call
> mmdrop() from, so make mmdrop_async() public and use that.

But Matthew Wilcox quickly pointed out that sharing one page->rcu_head
between multiple page tables is tricky: something I knew but had lost
sight of. So the powerpc and s390 patches were broken: powerpc fairly
easily fixed, but s390 more painful.

In https://lore.kernel.org/linux-s390/20230601155751.7c949ca4@thinkpad-T15/
On Thu, 1 Jun 2023 15:57:51 +0200
Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote:
>
> Yes, we have 2 pagetables in one 4K page, which could result in same
> rcu_head reuse. It might be possible to use the cleverness from our
> page_table_free() function, e.g. to only do the call_rcu() once, for
> the case where both 2K pagetable fragments become unused, similar to
> how we decide when to actually call __free_page().
>
> However, it might be much worse, and page->rcu_head from a pagetable
> page cannot be used at all for s390, because we also use page->lru
> to keep our list of free 2K pagetable fragments. I always get confused
> by struct page unions, so not completely sure, but it seems to me that
> page->rcu_head would overlay with page->lru, right?

Sigh, yes, page->rcu_head overlays page->lru. But (please correct me if
I'm wrong) I think that s390 could use exactly the same technique for
its list of free 2K pagetable fragments as it uses for its list of THP
"deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
the first two longs of the page table itself for threading the list.

And while it could use third and fourth longs instead, I don't see any
need for that: a deposited pagetable has been allocated, so would not
be on the list of free fragments.

Below is one of the grossest patches I've ever posted: gross because
it's a rushed attempt to see whether that is viable, while it would take
me longer to understand all the s390 cleverness there (even though the
PP AA commentary above page_table_alloc() is excellent).

I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint.
And cmma_init_nodat()? Ah, that's __init so I guess disjoint.

Gerald, s390 folk: would it be possible for you to give this
a try, suggest corrections and improvements, and then I can make it
a separate patch of the series; and work on avoiding concurrent use
of the rcu_head by pagetable fragment buddies (ideally fit in with
the scheme already there, maybe DD bits to go along with the PP AA).

Why am I even asking you to move away from page->lru: why don't I
thread s390's pte_free_defer() pagetables like THP's deposit does?
I cannot, because the deferred pagetables have to remain accessible
as valid pagetables, until the RCU grace period has elapsed - unless
all the list pointers would appear as pte_none(), which I doubt.

(That may limit our possibilities with the deposited pagetables in
future: I can imagine them too wanting to remain accessible as valid
pagetables. But that's not needed by this series, and s390 only uses
deposit/withdraw for anon THP; and some are hoping that we might be
able to move away from deposit/withdraw altogther - though powerpc's
special use will make that more difficult.)

Thanks!
Hugh

--- 6.4-rc5/arch/s390/mm/pgalloc.c
+++ linux/arch/s390/mm/pgalloc.c
@@ -232,6 +232,7 @@ void page_table_free_pgste(struct page *
  */
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
+	struct list_head *listed;
 	unsigned long *table;
 	struct page *page;
 	unsigned int mask, bit;
@@ -241,8 +242,8 @@ unsigned long *page_table_alloc(struct m
 	table = NULL;
 	spin_lock_bh(&mm->context.lock);
 	if (!list_empty(&mm->context.pgtable_list)) {
-		page = list_first_entry(&mm->context.pgtable_list,
-					struct page, lru);
+		listed = mm->context.pgtable_list.next;
+		page = virt_to_page(listed);
 		mask = atomic_read(&page->_refcount) >> 24;
 		/*
 		 * The pending removal bits must also be checked.
@@ -259,9 +260,12 @@ unsigned long *page_table_alloc(struct m
 			bit = mask & 1;		/* =1 -> second 2K */
 			if (bit)
 				table += PTRS_PER_PTE;
+			BUG_ON(table != (unsigned long *)listed);
 			atomic_xor_bits(&page->_refcount,
 					0x01U << (bit + 24));
-			list_del(&page->lru);
+			list_del(listed);
+			set_pte((pte_t *)&table[0], __pte(_PAGE_INVALID));
+			set_pte((pte_t *)&table[1], __pte(_PAGE_INVALID));
 		}
 	}
 	spin_unlock_bh(&mm->context.lock);
@@ -288,8 +292,9 @@ unsigned long *page_table_alloc(struct m
 	/* Return the first 2K fragment of the page */
 	atomic_xor_bits(&page->_refcount, 0x01U << 24);
 	memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
+	listed = (struct list head *)(table + PTRS_PER_PTE);
 	spin_lock_bh(&mm->context.lock);
-	list_add(&page->lru, &mm->context.pgtable_list);
+	list_add(listed, &mm->context.pgtable_list);
 	spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -310,6 +315,7 @@ static void page_table_release_check(str
 
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
+	struct list_head *listed;
 	unsigned int mask, bit, half;
 	struct page *page;
 
@@ -325,10 +331,24 @@ void page_table_free(struct mm_struct *m
 	 */
 	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
 	mask >>= 24;
-	if (mask & 0x03U)
-		list_add(&page->lru, &mm->context.pgtable_list);
-	else
-		list_del(&page->lru);
+	if (mask & 0x03U) {
+		listed = (struct list_head *)table;
+		list_add(listed, &mm->context.pgtable_list);
+	} else {
+		/*
+		 * Get address of the other page table sharing the page.
+		 * There are sure to be MUCH better ways to do all this!
+		 * But I'm rushing, while trying to keep to the obvious.
+		 */
+		listed = (struct list_head *)(table + PTRS_PER_PTE);
+		if (virt_to_page(listed) != page) {
+			/* sizeof(*listed) is twice sizeof(*table) */
+			listed -= PTRS_PER_PTE;
+		}
+		list_del(listed);
+		set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
+		set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
+	}
 	spin_unlock_bh(&mm->context.lock);
 	mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
 	mask >>= 24;
@@ -349,6 +369,7 @@ void page_table_free(struct mm_struct *m
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
+	struct list_head *listed;
 	struct mm_struct *mm;
 	struct page *page;
 	unsigned int bit, mask;
@@ -370,10 +391,24 @@ void page_table_free_rcu(struct mmu_gath
 	 */
 	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
 	mask >>= 24;
-	if (mask & 0x03U)
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
-	else
-		list_del(&page->lru);
+	if (mask & 0x03U) {
+		listed = (struct list_head *)table;
+		list_add_tail(listed, &mm->context.pgtable_list);
+	} else {
+		/*
+		 * Get address of the other page table sharing the page.
+		 * There are sure to be MUCH better ways to do all this!
+		 * But I'm rushing, and trying to keep to the obvious.
+		 */
+		listed = (struct list_head *)(table + PTRS_PER_PTE);
+		if (virt_to_page(listed) != page) {
+			/* sizeof(*listed) is twice sizeof(*table) */
+			listed -= PTRS_PER_PTE;
+		}
+		list_del(listed);
+		set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
+		set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
+	}
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
 	tlb_remove_table(tlb, table);
On Mon, Jun 05, 2023 at 10:11:52PM -0700, Hugh Dickins wrote:

> "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> the first two longs of the page table itself for threading the list.

It is not RCU anymore if it writes to the page table itself before the
grace period, so this change seems to break the RCU behavior of
page_table_free_rcu().. The rcu sync is inside tlb_remove_table()
called after the stores.

Maybe something like an xarray on the mm to hold the frags?

Jason
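For concreteness, a rough sketch of what "an xarray on the mm to hold the frags" could look like; the field mm->context.pgtable_frags and both helpers are invented here purely for illustration, and note that xa_store() may itself allocate memory, which matters for the no-allocation path pte_free_defer() is meant to serve.

/*
 * Illustrative only: pgtable_frags is a hypothetical xarray hanging off
 * mm->context, indexed by the 2K fragment address; nothing here exists
 * in the s390 tree.
 */
static int pgtable_frag_record(struct mm_struct *mm, unsigned long *table)
{
	/* xa_store() may allocate internal nodes, hence the gfp argument */
	return xa_err(xa_store(&mm->context.pgtable_frags,
			       (unsigned long)table >> 11, table, GFP_ATOMIC));
}

static unsigned long *pgtable_frag_take(struct mm_struct *mm)
{
	unsigned long index = 0;
	unsigned long *table;

	/* Claim any recorded free 2K fragment, if one is available */
	table = xa_find(&mm->context.pgtable_frags, &index, ULONG_MAX, XA_PRESENT);
	if (table)
		xa_erase(&mm->context.pgtable_frags, index);
	return table;
}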
On Mon, 5 Jun 2023 22:11:52 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote: > On Sun, 28 May 2023, Hugh Dickins wrote: > > > Add s390-specific pte_free_defer(), to call pte_free() via call_rcu(). > > pte_free_defer() will be called inside khugepaged's retract_page_tables() > > loop, where allocating extra memory cannot be relied upon. This precedes > > the generic version to avoid build breakage from incompatible pgtable_t. > > > > This version is more complicated than others: because page_table_free() > > needs to know which fragment is being freed, and which mm to link it to. > > > > page_table_free()'s fragment handling is clever, but I could too easily > > break it: what's done here in pte_free_defer() and pte_free_now() might > > be better integrated with page_table_free()'s cleverness, but not by me! > > > > By the time that page_table_free() gets called via RCU, it's conceivable > > that mm would already have been freed: so mmgrab() in pte_free_defer() > > and mmdrop() in pte_free_now(). No, that is not a good context to call > > mmdrop() from, so make mmdrop_async() public and use that. > > But Matthew Wilcox quickly pointed out that sharing one page->rcu_head > between multiple page tables is tricky: something I knew but had lost > sight of. So the powerpc and s390 patches were broken: powerpc fairly > easily fixed, but s390 more painful. > > In https://lore.kernel.org/linux-s390/20230601155751.7c949ca4@thinkpad-T15/ > On Thu, 1 Jun 2023 15:57:51 +0200 > Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote: > > > > Yes, we have 2 pagetables in one 4K page, which could result in same > > rcu_head reuse. It might be possible to use the cleverness from our > > page_table_free() function, e.g. to only do the call_rcu() once, for > > the case where both 2K pagetable fragments become unused, similar to > > how we decide when to actually call __free_page(). > > > > However, it might be much worse, and page->rcu_head from a pagetable > > page cannot be used at all for s390, because we also use page->lru > > to keep our list of free 2K pagetable fragments. I always get confused > > by struct page unions, so not completely sure, but it seems to me that > > page->rcu_head would overlay with page->lru, right? > > Sigh, yes, page->rcu_head overlays page->lru. But (please correct me if > I'm wrong) I think that s390 could use exactly the same technique for > its list of free 2K pagetable fragments as it uses for its list of THP > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use > the first two longs of the page table itself for threading the list. Nice idea, I think that could actually work, since we only need the empty 2K halves on the list. So it should be possible to store the list_head inside those. > > And while it could use third and fourth longs instead, I don't see any > need for that: a deposited pagetable has been allocated, so would not > be on the list of free fragments. Correct, that should not interfere. > > Below is one of the grossest patches I've ever posted: gross because > it's a rushed attempt to see whether that is viable, while it would take > me longer to understand all the s390 cleverness there (even though the > PP AA commentary above page_table_alloc() is excellent). Sounds fair, this is also one of the grossest code we have, which is also why Alexander added the comment. I guess we could need even more comments inside the code, as it still confuses me more than it should. Considering that, you did remarkably well. 
Your patch seems to work fine, at least it survived some LTP mm tests. I will also add it to our CI runs, to give it some more testing. Will report tomorrow when it broke something. See also below for some patch comments. > > I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint. > And cmma_init_nodat()? Ah, that's __init so I guess disjoint. cmma_init_nodat() should be disjoint, not only because it is __init, but also because it explicitly skips pagetable pages, so it should never touch page->lru of those. Not very familiar with the gmap code, it does look disjoint, and we should also use complete 4K pages for pagetables instead of 2K fragments there, but Christian or Claudio should also have a look. > > Gerald, s390 folk: would it be possible for you to give this > a try, suggest corrections and improvements, and then I can make it > a separate patch of the series; and work on avoiding concurrent use > of the rcu_head by pagetable fragment buddies (ideally fit in with > the scheme already there, maybe DD bits to go along with the PP AA). It feels like it could be possible to not only avoid the double rcu_head, but also avoid passing over the mm via page->pt_mm. I.e. have pte_free_defer(), which has the mm, do all the checks and list updates that page_table_free() does, for which we need the mm. Then just skip the pgtable_pte_page_dtor() + __free_page() at the end, and do call_rcu(pte_free_now) instead. The pte_free_now() could then just do _dtor/__free_page similar to the generic version. I must admit that I still have no good overview of the "big picture" here, and especially if this approach would still fit in. Probably not, as the to-be-freed pagetables would still be accessible, but not really valid, if we added them back to the list, with list_heads inside them. So maybe call_rcu() has to be done always, and not only for the case where the whole 4K page becomes free, then we probably cannot do w/o passing over the mm for proper list handling. Ah, and they could also be re-used, once they are back on the list, which will probably not go well. Is that what you meant with DD bits, i.e. mark such fragments to prevent re-use? Smells a bit like the "pending purge" > > Why am I even asking you to move away from page->lru: why don't I > thread s390's pte_free_defer() pagetables like THP's deposit does? > I cannot, because the deferred pagetables have to remain accessible > as valid pagetables, until the RCU grace period has elapsed - unless > all the list pointers would appear as pte_none(), which I doubt. Yes, only empty and invalid PTEs will appear as pte_none(), i.e. entries that contain only 0x400. Ok, I guess that also explains why the approach mentioned above, to avoid passing over the mm and do the list handling already in pte_free_defer(), will not be so easy or possible at all. > > (That may limit our possibilities with the deposited pagetables in > future: I can imagine them too wanting to remain accessible as valid > pagetables. But that's not needed by this series, and s390 only uses > deposit/withdraw for anon THP; and some are hoping that we might be > able to move away from deposit/withdraw altogther - though powerpc's > special use will make that more difficult.) > > Thanks! 
> Hugh > > --- 6.4-rc5/arch/s390/mm/pgalloc.c > +++ linux/arch/s390/mm/pgalloc.c > @@ -232,6 +232,7 @@ void page_table_free_pgste(struct page * > */ > unsigned long *page_table_alloc(struct mm_struct *mm) > { > + struct list_head *listed; > unsigned long *table; > struct page *page; > unsigned int mask, bit; > @@ -241,8 +242,8 @@ unsigned long *page_table_alloc(struct m > table = NULL; > spin_lock_bh(&mm->context.lock); > if (!list_empty(&mm->context.pgtable_list)) { > - page = list_first_entry(&mm->context.pgtable_list, > - struct page, lru); > + listed = mm->context.pgtable_list.next; > + page = virt_to_page(listed); > mask = atomic_read(&page->_refcount) >> 24; > /* > * The pending removal bits must also be checked. > @@ -259,9 +260,12 @@ unsigned long *page_table_alloc(struct m > bit = mask & 1; /* =1 -> second 2K */ > if (bit) > table += PTRS_PER_PTE; > + BUG_ON(table != (unsigned long *)listed); > atomic_xor_bits(&page->_refcount, > 0x01U << (bit + 24)); > - list_del(&page->lru); > + list_del(listed); > + set_pte((pte_t *)&table[0], __pte(_PAGE_INVALID)); > + set_pte((pte_t *)&table[1], __pte(_PAGE_INVALID)); > } > } > spin_unlock_bh(&mm->context.lock); > @@ -288,8 +292,9 @@ unsigned long *page_table_alloc(struct m > /* Return the first 2K fragment of the page */ > atomic_xor_bits(&page->_refcount, 0x01U << 24); > memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); > + listed = (struct list head *)(table + PTRS_PER_PTE); Missing "_" in "struct list head" > spin_lock_bh(&mm->context.lock); > - list_add(&page->lru, &mm->context.pgtable_list); > + list_add(listed, &mm->context.pgtable_list); > spin_unlock_bh(&mm->context.lock); > } > return table; > @@ -310,6 +315,7 @@ static void page_table_release_check(str > > void page_table_free(struct mm_struct *mm, unsigned long *table) > { > + struct list_head *listed; > unsigned int mask, bit, half; > struct page *page; Not sure if "reverse X-mas" is still part of any style guidelines, but I still am a big fan of that :-). Although the other code in that file is also not consistently using it ... > > @@ -325,10 +331,24 @@ void page_table_free(struct mm_struct *m > */ > mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); > mask >>= 24; > - if (mask & 0x03U) > - list_add(&page->lru, &mm->context.pgtable_list); > - else > - list_del(&page->lru); > + if (mask & 0x03U) { > + listed = (struct list_head *)table; > + list_add(listed, &mm->context.pgtable_list); > + } else { > + /* > + * Get address of the other page table sharing the page. > + * There are sure to be MUCH better ways to do all this! > + * But I'm rushing, while trying to keep to the obvious. > + */ > + listed = (struct list_head *)(table + PTRS_PER_PTE); > + if (virt_to_page(listed) != page) { > + /* sizeof(*listed) is twice sizeof(*table) */ > + listed -= PTRS_PER_PTE; > + } Bitwise XOR with 0x800 should do the trick here, i.e. 
give you the address of the other 2K half, like this: listed = (struct list_head *)((unsigned long) table ^ 0x800UL); > + list_del(listed); > + set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID)); > + set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID)); > + } > spin_unlock_bh(&mm->context.lock); > mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24)); > mask >>= 24; > @@ -349,6 +369,7 @@ void page_table_free(struct mm_struct *m > void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, > unsigned long vmaddr) > { > + struct list_head *listed; > struct mm_struct *mm; > struct page *page; > unsigned int bit, mask; > @@ -370,10 +391,24 @@ void page_table_free_rcu(struct mmu_gath > */ > mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); > mask >>= 24; > - if (mask & 0x03U) > - list_add_tail(&page->lru, &mm->context.pgtable_list); > - else > - list_del(&page->lru); > + if (mask & 0x03U) { > + listed = (struct list_head *)table; > + list_add_tail(listed, &mm->context.pgtable_list); > + } else { > + /* > + * Get address of the other page table sharing the page. > + * There are sure to be MUCH better ways to do all this! > + * But I'm rushing, and trying to keep to the obvious. > + */ > + listed = (struct list_head *)(table + PTRS_PER_PTE); > + if (virt_to_page(listed) != page) { > + /* sizeof(*listed) is twice sizeof(*table) */ > + listed -= PTRS_PER_PTE; > + } Same as above. > + list_del(listed); > + set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID)); > + set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID)); > + } > spin_unlock_bh(&mm->context.lock); > table = (unsigned long *) ((unsigned long) table | (0x01U << bit)); > tlb_remove_table(tlb, table); Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
On Tue, 6 Jun 2023, Jason Gunthorpe wrote:
> On Mon, Jun 05, 2023 at 10:11:52PM -0700, Hugh Dickins wrote:
>
> > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> > the first two longs of the page table itself for threading the list.
>
> It is not RCU anymore if it writes to the page table itself before the
> grace period, so this change seems to break the RCU behavior of
> page_table_free_rcu().. The rcu sync is inside tlb_remove_table()
> called after the stores.

Yes indeed, thanks for pointing that out.

>
> Maybe something like an xarray on the mm to hold the frags?

I think we can manage without that: I'll say slightly more in reply
to Gerald.

Hugh
On Tue, 6 Jun 2023, Gerald Schaefer wrote: > On Mon, 5 Jun 2023 22:11:52 -0700 (PDT) > Hugh Dickins <hughd@google.com> wrote: > > On Thu, 1 Jun 2023 15:57:51 +0200 > > Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote: > > > > > > Yes, we have 2 pagetables in one 4K page, which could result in same > > > rcu_head reuse. It might be possible to use the cleverness from our > > > page_table_free() function, e.g. to only do the call_rcu() once, for > > > the case where both 2K pagetable fragments become unused, similar to > > > how we decide when to actually call __free_page(). > > > > > > However, it might be much worse, and page->rcu_head from a pagetable > > > page cannot be used at all for s390, because we also use page->lru > > > to keep our list of free 2K pagetable fragments. I always get confused > > > by struct page unions, so not completely sure, but it seems to me that > > > page->rcu_head would overlay with page->lru, right? > > > > Sigh, yes, page->rcu_head overlays page->lru. But (please correct me if > > I'm wrong) I think that s390 could use exactly the same technique for > > its list of free 2K pagetable fragments as it uses for its list of THP > > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use > > the first two longs of the page table itself for threading the list. > > Nice idea, I think that could actually work, since we only need the empty > 2K halves on the list. So it should be possible to store the list_head > inside those. Jason quickly pointed out the flaw in my thinking there. > > > > > And while it could use third and fourth longs instead, I don't see any > > need for that: a deposited pagetable has been allocated, so would not > > be on the list of free fragments. > > Correct, that should not interfere. > > > > > Below is one of the grossest patches I've ever posted: gross because > > it's a rushed attempt to see whether that is viable, while it would take > > me longer to understand all the s390 cleverness there (even though the > > PP AA commentary above page_table_alloc() is excellent). > > Sounds fair, this is also one of the grossest code we have, which is also > why Alexander added the comment. I guess we could need even more comments > inside the code, as it still confuses me more than it should. > > Considering that, you did remarkably well. Your patch seems to work fine, > at least it survived some LTP mm tests. I will also add it to our CI runs, > to give it some more testing. Will report tomorrow when it broke something. > See also below for some patch comments. Many thanks for your effort on this patch. I don't expect the testing of it to catch Jason's point, that I'm corrupting the page table while it's on its way through RCU to being freed, but he's right nonetheless. I'll integrate your fixes below into what I have here, but probably just archive it as something to refer to later in case it might play a part; but probably it will not - sorry for wasting your time. > > > > > I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint. > > And cmma_init_nodat()? Ah, that's __init so I guess disjoint. > > cmma_init_nodat() should be disjoint, not only because it is __init, > but also because it explicitly skips pagetable pages, so it should > never touch page->lru of those. > > Not very familiar with the gmap code, it does look disjoint, and we should > also use complete 4K pages for pagetables instead of 2K fragments there, > but Christian or Claudio should also have a look. 
> > > > > Gerald, s390 folk: would it be possible for you to give this > > a try, suggest corrections and improvements, and then I can make it > > a separate patch of the series; and work on avoiding concurrent use > > of the rcu_head by pagetable fragment buddies (ideally fit in with > > the scheme already there, maybe DD bits to go along with the PP AA). > > It feels like it could be possible to not only avoid the double > rcu_head, but also avoid passing over the mm via page->pt_mm. > I.e. have pte_free_defer(), which has the mm, do all the checks and > list updates that page_table_free() does, for which we need the mm. > Then just skip the pgtable_pte_page_dtor() + __free_page() at the end, > and do call_rcu(pte_free_now) instead. The pte_free_now() could then > just do _dtor/__free_page similar to the generic version. I'm not sure: I missed your suggestion there when I first skimmed through, and today have spent more time getting deeper into how it's done at present. I am now feeling more confident of a way forward, a nicely integrated way forward, than I was yesterday. Though getting it right may not be so easy. When Jason pointed out the existing RCU, I initially hoped that it might already provide the necessary framework: but sadly not, because the unbatched case (used when additional memory is not available) does not use RCU at all, but instead the tlb_remove_table_sync_one() IRQ hack. If I used that, it would cripple the s390 implementation unacceptably. > > I must admit that I still have no good overview of the "big picture" > here, and especially if this approach would still fit in. Probably not, > as the to-be-freed pagetables would still be accessible, but not really > valid, if we added them back to the list, with list_heads inside them. > So maybe call_rcu() has to be done always, and not only for the case > where the whole 4K page becomes free, then we probably cannot do w/o > passing over the mm for proper list handling. My current thinking (but may be proved wrong) is along the lines of: why does something on its way to being freed need to be on any list than the rcu_head list? I expect the current answer is, that the other half is allocated, so the page won't be freed; but I hope that we can put it back on that list once we're through with the rcu_head. But the less I say now, the less I shall make a fool of myself: I need to get deeper in. > > Ah, and they could also be re-used, once they are back on the list, > which will probably not go well. Is that what you meant with DD bits, > i.e. mark such fragments to prevent re-use? Smells a bit like the > "pending purge" Yes, we may not need those DD defer bits at all: the pte_free_defer() pagetables should fit very well with "pending purge" as it is. They will go down an unbatched route, but should be obeying the same rules. > > > > > Why am I even asking you to move away from page->lru: why don't I > > thread s390's pte_free_defer() pagetables like THP's deposit does? > > I cannot, because the deferred pagetables have to remain accessible > > as valid pagetables, until the RCU grace period has elapsed - unless > > all the list pointers would appear as pte_none(), which I doubt. > > Yes, only empty and invalid PTEs will appear as pte_none(), i.e. entries > that contain only 0x400. > > Ok, I guess that also explains why the approach mentioned above, > to avoid passing over the mm and do the list handling already in > pte_free_defer(), will not be so easy or possible at all. 
> > > > > (That may limit our possibilities with the deposited pagetables in > > future: I can imagine them too wanting to remain accessible as valid > > pagetables. But that's not needed by this series, and s390 only uses > > deposit/withdraw for anon THP; and some are hoping that we might be > > able to move away from deposit/withdraw altogther - though powerpc's > > special use will make that more difficult.) > > > > Thanks! > > Hugh > > > > --- 6.4-rc5/arch/s390/mm/pgalloc.c > > +++ linux/arch/s390/mm/pgalloc.c > > @@ -232,6 +232,7 @@ void page_table_free_pgste(struct page * > > */ > > unsigned long *page_table_alloc(struct mm_struct *mm) > > { > > + struct list_head *listed; > > unsigned long *table; > > struct page *page; > > unsigned int mask, bit; > > @@ -241,8 +242,8 @@ unsigned long *page_table_alloc(struct m > > table = NULL; > > spin_lock_bh(&mm->context.lock); > > if (!list_empty(&mm->context.pgtable_list)) { > > - page = list_first_entry(&mm->context.pgtable_list, > > - struct page, lru); > > + listed = mm->context.pgtable_list.next; > > + page = virt_to_page(listed); > > mask = atomic_read(&page->_refcount) >> 24; > > /* > > * The pending removal bits must also be checked. > > @@ -259,9 +260,12 @@ unsigned long *page_table_alloc(struct m > > bit = mask & 1; /* =1 -> second 2K */ > > if (bit) > > table += PTRS_PER_PTE; > > + BUG_ON(table != (unsigned long *)listed); > > atomic_xor_bits(&page->_refcount, > > 0x01U << (bit + 24)); > > - list_del(&page->lru); > > + list_del(listed); > > + set_pte((pte_t *)&table[0], __pte(_PAGE_INVALID)); > > + set_pte((pte_t *)&table[1], __pte(_PAGE_INVALID)); > > } > > } > > spin_unlock_bh(&mm->context.lock); > > @@ -288,8 +292,9 @@ unsigned long *page_table_alloc(struct m > > /* Return the first 2K fragment of the page */ > > atomic_xor_bits(&page->_refcount, 0x01U << 24); > > memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); > > + listed = (struct list head *)(table + PTRS_PER_PTE); > > Missing "_" in "struct list head" > > > spin_lock_bh(&mm->context.lock); > > - list_add(&page->lru, &mm->context.pgtable_list); > > + list_add(listed, &mm->context.pgtable_list); > > spin_unlock_bh(&mm->context.lock); > > } > > return table; > > @@ -310,6 +315,7 @@ static void page_table_release_check(str > > > > void page_table_free(struct mm_struct *mm, unsigned long *table) > > { > > + struct list_head *listed; > > unsigned int mask, bit, half; > > struct page *page; > > Not sure if "reverse X-mas" is still part of any style guidelines, > but I still am a big fan of that :-). Although the other code in that > file is also not consistently using it ... > > > > > @@ -325,10 +331,24 @@ void page_table_free(struct mm_struct *m > > */ > > mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); > > mask >>= 24; > > - if (mask & 0x03U) > > - list_add(&page->lru, &mm->context.pgtable_list); > > - else > > - list_del(&page->lru); > > + if (mask & 0x03U) { > > + listed = (struct list_head *)table; > > + list_add(listed, &mm->context.pgtable_list); > > + } else { > > + /* > > + * Get address of the other page table sharing the page. > > + * There are sure to be MUCH better ways to do all this! > > + * But I'm rushing, while trying to keep to the obvious. > > + */ > > + listed = (struct list_head *)(table + PTRS_PER_PTE); > > + if (virt_to_page(listed) != page) { > > + /* sizeof(*listed) is twice sizeof(*table) */ > > + listed -= PTRS_PER_PTE; > > + } > > Bitwise XOR with 0x800 should do the trick here, i.e. 
give you the address > of the other 2K half, like this: > > listed = (struct list_head *)((unsigned long) table ^ 0x800UL); > > > + list_del(listed); > > + set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID)); > > + set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID)); > > + } > > spin_unlock_bh(&mm->context.lock); > > mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24)); > > mask >>= 24; > > @@ -349,6 +369,7 @@ void page_table_free(struct mm_struct *m > > void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, > > unsigned long vmaddr) > > { > > + struct list_head *listed; > > struct mm_struct *mm; > > struct page *page; > > unsigned int bit, mask; > > @@ -370,10 +391,24 @@ void page_table_free_rcu(struct mmu_gath > > */ > > mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); > > mask >>= 24; > > - if (mask & 0x03U) > > - list_add_tail(&page->lru, &mm->context.pgtable_list); > > - else > > - list_del(&page->lru); > > + if (mask & 0x03U) { > > + listed = (struct list_head *)table; > > + list_add_tail(listed, &mm->context.pgtable_list); > > + } else { > > + /* > > + * Get address of the other page table sharing the page. > > + * There are sure to be MUCH better ways to do all this! > > + * But I'm rushing, and trying to keep to the obvious. > > + */ > > + listed = (struct list_head *)(table + PTRS_PER_PTE); > > + if (virt_to_page(listed) != page) { > > + /* sizeof(*listed) is twice sizeof(*table) */ > > + listed -= PTRS_PER_PTE; > > + } > > Same as above. > > > + list_del(listed); > > + set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID)); > > + set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID)); > > + } > > spin_unlock_bh(&mm->context.lock); > > table = (unsigned long *) ((unsigned long) table | (0x01U << bit)); > > tlb_remove_table(tlb, table); > > Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Thanks a lot, Gerald, sorry that it now looks like wasted effort. I'm feeling confident enough of getting into s390 PP-AA-world now, that I think my top priority should be posting a v2 of the two preliminary series: get those out before focusing back on s390 mm/pgalloc.c. Is it too early to wish you a happy reverse Xmas? Hugh
On Wed, Jun 07, 2023 at 08:35:05PM -0700, Hugh Dickins wrote:

> My current thinking (but may be proved wrong) is along the lines of:
> why does something on its way to being freed need to be on any list
> than the rcu_head list? I expect the current answer is, that the
> other half is allocated, so the page won't be freed; but I hope that
> we can put it back on that list once we're through with the rcu_head.

I was having the same thought. It is pretty tricky, but if this was
made into some core helper then PPC and S390 could both use it and PPC
would get a nice upgrade to have the S390 frag re-use instead of
leaking frags.

Broadly we have three states:

  all frags free
  at least one frag free
  all frags used

'all frags free' should be returned to the allocator
'at least one frag free' should have the struct page on the mmu_struct's list
'all frags used' should be on no list.

So if we go from
  all frags used -> at least one frag free
Then we put it on the RCU then the RCU puts it on the mmu_struct list

If we go from
  at least one frag free -> all frags free
Then we take it off the mmu_struct list, put it on the RCU, and RCU
frees it.

Your trick to put the list_head for the mm_struct list into the frag
memory looks like the right direction. So 'at least one frag free' has
a single already RCU free'd frag hold the list head pointer. Thus we
never use the LRU and the rcu_head is always available.

The struct page itself can contain the actual free frag bitmask.

I think if we split up the memory used for pt_frag_refcount we can get
enough bits to keep track of everything. With only 2-4 frags we should
be OK.

So we track this data in the struct page:
 - Current RCU free TODO bitmask - if non-zero then a RCU is already triggered
 - Next RCU TODO bitmaks - If an RCU is already triggrered then we accumulate
   more free'd frags here
 - Current Free Bits - Only updated by the RCU callback ?

We'd also need to store the mmu_struct pointer in the struct page for
the RCU to be able to add/remove from the mm_struct list.

I'm not sure how much of the work can be done with atomics and how
much would need to rely on spinlock inside the mm_struct.

It feels feasible and not so bad. :)

Figure it out and test it on S390 then make power use the same common
code, and we get full RCU page table freeing using a reliable rcu_head
on both of these previously troublesome architectures :) Yay

Jason
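A possible shape for the bookkeeping described above, reusing the 32 bits that currently hold pt_frag_refcount; every name in this sketch (pt_frag_state, the PT_* shifts, pt_frag_defer()) is invented for illustration and nothing here is existing code.

/*
 * Illustrative only.  Three bitfields packed into one atomic_t in the
 * struct page: frags known free, frags queued in the RCU grace period
 * currently in flight, and frags freed meanwhile, to be queued in the
 * next grace period.
 */
#define PT_FREE_SHIFT	0
#define PT_RCU_SHIFT	8
#define PT_NEXT_SHIFT	16
#define PT_FRAG_MASK	0xffU

/* Returns true if the caller should issue the call_rcu() itself */
static bool pt_frag_defer(struct page *page, unsigned int frag_bit)
{
	int old, new;

	old = atomic_read(&page->pt_frag_state);	/* hypothetical field */
	do {
		new = old;
		if (old & (PT_FRAG_MASK << PT_RCU_SHIFT))
			new |= frag_bit << PT_NEXT_SHIFT; /* batch into next grace period */
		else
			new |= frag_bit << PT_RCU_SHIFT;  /* start a new grace period */
	} while (!atomic_try_cmpxchg(&page->pt_frag_state, &old, new));

	return !(old & (PT_FRAG_MASK << PT_RCU_SHIFT));
}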
On Wed, 7 Jun 2023 20:35:05 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote: > On Tue, 6 Jun 2023, Gerald Schaefer wrote: > > On Mon, 5 Jun 2023 22:11:52 -0700 (PDT) > > Hugh Dickins <hughd@google.com> wrote: > > > On Thu, 1 Jun 2023 15:57:51 +0200 > > > Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote: > > > > > > > > Yes, we have 2 pagetables in one 4K page, which could result in same > > > > rcu_head reuse. It might be possible to use the cleverness from our > > > > page_table_free() function, e.g. to only do the call_rcu() once, for > > > > the case where both 2K pagetable fragments become unused, similar to > > > > how we decide when to actually call __free_page(). > > > > > > > > However, it might be much worse, and page->rcu_head from a pagetable > > > > page cannot be used at all for s390, because we also use page->lru > > > > to keep our list of free 2K pagetable fragments. I always get confused > > > > by struct page unions, so not completely sure, but it seems to me that > > > > page->rcu_head would overlay with page->lru, right? > > > > > > Sigh, yes, page->rcu_head overlays page->lru. But (please correct me if > > > I'm wrong) I think that s390 could use exactly the same technique for > > > its list of free 2K pagetable fragments as it uses for its list of THP > > > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use > > > the first two longs of the page table itself for threading the list. > > > > Nice idea, I think that could actually work, since we only need the empty > > 2K halves on the list. So it should be possible to store the list_head > > inside those. > > Jason quickly pointed out the flaw in my thinking there. Yes, while I had the right concerns about "the to-be-freed pagetables would still be accessible, but not really valid, if we added them back to the list, with list_heads inside them", when suggesting the approach w/o passing over the mm, I missed that we would have the very same issue already with the existing page_table_free_rcu(). Thankfully Jason was watching out! > > > > > > > > > And while it could use third and fourth longs instead, I don't see any > > > need for that: a deposited pagetable has been allocated, so would not > > > be on the list of free fragments. > > > > Correct, that should not interfere. > > > > > > > > Below is one of the grossest patches I've ever posted: gross because > > > it's a rushed attempt to see whether that is viable, while it would take > > > me longer to understand all the s390 cleverness there (even though the > > > PP AA commentary above page_table_alloc() is excellent). > > > > Sounds fair, this is also one of the grossest code we have, which is also > > why Alexander added the comment. I guess we could need even more comments > > inside the code, as it still confuses me more than it should. > > > > Considering that, you did remarkably well. Your patch seems to work fine, > > at least it survived some LTP mm tests. I will also add it to our CI runs, > > to give it some more testing. Will report tomorrow when it broke something. > > See also below for some patch comments. > > Many thanks for your effort on this patch. I don't expect the testing > of it to catch Jason's point, that I'm corrupting the page table while > it's on its way through RCU to being freed, but he's right nonetheless. Right, tests ran fine, but we would have introduced subtle issues with racing gup_fast, I guess. 
> > I'll integrate your fixes below into what I have here, but probably > just archive it as something to refer to later in case it might play > a part; but probably it will not - sorry for wasting your time. No worries, looking at that s390 code can never be amiss. It seems I need regular refresh, at least I'm sure I already understood it better in the past. And who knows, with Jasons recent thoughts, that "list_head inside pagetable" idea might not be dead yet. > > > > > > > > > I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint. > > > And cmma_init_nodat()? Ah, that's __init so I guess disjoint. > > > > cmma_init_nodat() should be disjoint, not only because it is __init, > > but also because it explicitly skips pagetable pages, so it should > > never touch page->lru of those. > > > > Not very familiar with the gmap code, it does look disjoint, and we should > > also use complete 4K pages for pagetables instead of 2K fragments there, > > but Christian or Claudio should also have a look. > > > > > > > > Gerald, s390 folk: would it be possible for you to give this > > > a try, suggest corrections and improvements, and then I can make it > > > a separate patch of the series; and work on avoiding concurrent use > > > of the rcu_head by pagetable fragment buddies (ideally fit in with > > > the scheme already there, maybe DD bits to go along with the PP AA). > > > > It feels like it could be possible to not only avoid the double > > rcu_head, but also avoid passing over the mm via page->pt_mm. > > I.e. have pte_free_defer(), which has the mm, do all the checks and > > list updates that page_table_free() does, for which we need the mm. > > Then just skip the pgtable_pte_page_dtor() + __free_page() at the end, > > and do call_rcu(pte_free_now) instead. The pte_free_now() could then > > just do _dtor/__free_page similar to the generic version. > > I'm not sure: I missed your suggestion there when I first skimmed > through, and today have spent more time getting deeper into how it's > done at present. I am now feeling more confident of a way forward, > a nicely integrated way forward, than I was yesterday. > Though getting it right may not be so easy. I think my "feeling" was a déjà vu of the existing logic that we use for page_table_free_rcu() -> __tlb_remove_table(), where we also have no mm any more at the end, and use the PP bits magic to find out if the page can be freed, or if we still have fragments left. Of course, in that case, we also would not need the mm any more for list handling, as the to-be-freed fragments were already put back on the list, but with PP bits set, to prevent re-use. And clearing those would then make the fragment usable from the list again. I guess that would also be the major difference here, i.e. your RCU call-back would need to be able to add fragments back to the list, after having them removed before to make room for page->rcu_head, but with Jasons thoughts that does not seem so impossible after all. I do not yet understand if the list_head would then compulsorily need to be inside the pagetable, because page->rcu_head/lru still cannot be used (again). But you already have a patch for that, so either way might be possible. > > When Jason pointed out the existing RCU, I initially hoped that it might > already provide the necessary framework: but sadly not, because the > unbatched case (used when additional memory is not available) does not > use RCU at all, but instead the tlb_remove_table_sync_one() IRQ hack. 
> If I used that, it would cripple the s390 implementation unacceptably. > > > > > I must admit that I still have no good overview of the "big picture" > > here, and especially if this approach would still fit in. Probably not, > > as the to-be-freed pagetables would still be accessible, but not really > > valid, if we added them back to the list, with list_heads inside them. > > So maybe call_rcu() has to be done always, and not only for the case > > where the whole 4K page becomes free, then we probably cannot do w/o > > passing over the mm for proper list handling. > > My current thinking (but may be proved wrong) is along the lines of: > why does something on its way to being freed need to be on any list > than the rcu_head list? I expect the current answer is, that the > other half is allocated, so the page won't be freed; but I hope that > we can put it back on that list once we're through with the rcu_head. Yes, that looks promising. Such a fragment would not necessarily need to be on the list, because while it is on its way, i.e. before the RCU call-back finished, it cannot be re-used anyway. page_table_alloc() could currently find such a fragment on the list, but only to see the PP bits set, so it will not use it. Only after __tlb_remove_table() in the RCU call-back resets the bits, it would be usable again. In your case, that could correspond to adding it back to the list. That could even be an improvement, because page_table_alloc() would not be bothered by such unusable fragments. [...] > > Is it too early to wish you a happy reverse Xmas? Nice idea, we should make June 24th the reverse Xmas Remembrance Day :-)
diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..89a9d5ef94f8 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -143,6 +143,10 @@ static inline void pmd_populate(struct mm_struct *mm,
 #define pte_free_kernel(mm, pte) page_table_free(mm, (unsigned long *) pte)
 #define pte_free(mm, pte) page_table_free(mm, (unsigned long *) pte)
 
+/* arch use pte_free_defer() implementation in arch/s390/mm/pgalloc.c */
+#define pte_free_defer pte_free_defer
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+
 void vmem_map_init(void);
 void *vmem_crst_alloc(unsigned long val);
 pte_t *vmem_pte_alloc(void);
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 66ab68db9842..0129de9addfd 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -346,6 +346,40 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 	__free_page(page);
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void pte_free_now(struct rcu_head *head)
+{
+	struct page *page;
+	unsigned long mm_bit;
+	struct mm_struct *mm;
+	unsigned long *table;
+
+	page = container_of(head, struct page, rcu_head);
+	table = (unsigned long *)page_to_virt(page);
+	mm_bit = (unsigned long)page->pt_mm;
+	/* 4K page has only two 2K fragments, but alignment allows eight */
+	mm = (struct mm_struct *)(mm_bit & ~7);
+	table += PTRS_PER_PTE * (mm_bit & 7);
+	page_table_free(mm, table);
+	mmdrop_async(mm);
+}
+
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+{
+	struct page *page;
+	unsigned long mm_bit;
+
+	mmgrab(mm);
+	page = virt_to_page(pgtable);
+	/* Which 2K page table fragment of a 4K page? */
+	mm_bit = ((unsigned long)pgtable & ~PAGE_MASK) /
+			(PTRS_PER_PTE * sizeof(pte_t));
+	mm_bit += (unsigned long)mm;
+	page->pt_mm = (struct mm_struct *)mm_bit;
+	call_rcu(&page->rcu_head, pte_free_now);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..1667a1bdb8a8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -146,7 +146,7 @@ struct page {
 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
 			unsigned long _pt_pad_2;	/* mapping */
 			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
+				struct mm_struct *pt_mm; /* x86 pgd, s390 */
 				atomic_t pt_frag_refcount; /* powerpc */
 			};
 #if ALLOC_SPLIT_PTLOCKS
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 8d89c8c4fac1..a9043d1a0d55 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -41,6 +41,7 @@ static inline void smp_mb__after_mmgrab(void)
 	smp_mb__after_atomic();
 }
 
+extern void mmdrop_async(struct mm_struct *mm);
 extern void __mmdrop(struct mm_struct *mm);
 
 static inline void mmdrop(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..fa4486b65c56 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -942,7 +942,7 @@ static void mmdrop_async_fn(struct work_struct *work)
 	__mmdrop(mm);
 }
 
-static void mmdrop_async(struct mm_struct *mm)
+void mmdrop_async(struct mm_struct *mm)
 {
 	if (unlikely(atomic_dec_and_test(&mm->mm_count))) {
 		INIT_WORK(&mm->async_put_work, mmdrop_async_fn);