Message ID: 20240103091423.400294-7-peterx@redhat.com
State: New

Headers:
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton <jthoughton@google.com>, David Hildenbrand <david@redhat.com>, "Kirill A. Shutemov" <kirill@shutemov.name>, Yang Shi <shy828301@gmail.com>, peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton <akpm@linux-foundation.org>, "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>, Rik van Riel <riel@surriel.com>, Andrea Arcangeli <aarcange@redhat.com>, Axel Rasmussen <axelrasmussen@google.com>, Mike Rapoport <rppt@kernel.org>, John Hubbard <jhubbard@nvidia.com>, Vlastimil Babka <vbabka@suse.cz>, Michael Ellerman <mpe@ellerman.id.au>, Christophe Leroy <christophe.leroy@csgroup.eu>, Andrew Jones <andrew.jones@linux.dev>, linuxppc-dev@lists.ozlabs.org, Mike Kravetz <mike.kravetz@oracle.com>, Muchun Song <muchun.song@linux.dev>, linux-arm-kernel@lists.infradead.org, Jason Gunthorpe <jgg@nvidia.com>, Christoph Hellwig <hch@infradead.org>, Lorenzo Stoakes <lstoakes@gmail.com>, Matthew Wilcox <willy@infradead.org>
Subject: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
Date: Wed, 3 Jan 2024 17:14:16 +0800
Message-ID: <20240103091423.400294-7-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
Series: mm/gup: Unify hugetlb, part 2
Commit Message
Peter Xu
Jan. 3, 2024, 9:14 a.m. UTC
From: Peter Xu <peterx@redhat.com>

Hugepd format for GUP is only used in PowerPC with hugetlbfs. There is
some kernel usage of hugepd (see hugepd_populate_kernel() for PPC_8XX),
however those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there's potential
risk of violating GUP over writeback file systems. That should never apply
to hugepd. Considering that hugepd is an old format (and even
software-only), there's no plan to extend hugepd into other file-backed
memory types that are prone to the same issue.

Drop that check, not only because it'll never be true for hugepd per any
known plan, but also because it paves the way for reusing the function
outside fast-gup.

To make sure we'll still remember this issue in case hugepd is ever
extended to support non-hugetlbfs memories, add a rich comment above
gup_huge_pd(), explaining the issue with proper references.

Cc: Christoph Hellwig <hch@infradead.org>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
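For background, the check being dropped exists because a writable long-term pin taken through GUP-fast could outlive writeback of an ordinary file-backed folio, leaving the pinned page out of sync with the file. The sketch below is a simplified user-space model of that rule, not the kernel's folio_fast_pin_allowed(); the struct, helper name and flag values are invented for illustration, and the real function also deals with secretmem, swapcache and RCU-safety details.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified view of a folio's backing, for illustration only. */
enum backing { BACKING_ANON, BACKING_HUGETLBFS, BACKING_SHMEM, BACKING_FILE };

struct folio_model {
	enum backing backing;
};

/* Flag values are arbitrary for this model; they are not the kernel's. */
#define FOLL_WRITE    0x01
#define FOLL_LONGTERM 0x02

/*
 * Simplified model of the rule from commit a6e79df92e4a: a writable
 * long-term pin taken in the fast path is only safe when the memory can
 * never be written back to a real file system (anon, hugetlbfs, shmem).
 */
static bool longterm_fast_pin_allowed(const struct folio_model *folio,
				      unsigned int flags)
{
	if (!(flags & FOLL_LONGTERM) || !(flags & FOLL_WRITE))
		return true;	/* short-term or read-only pins are fine */

	switch (folio->backing) {
	case BACKING_ANON:
	case BACKING_HUGETLBFS:
	case BACKING_SHMEM:
		return true;	/* no writeback against a file system */
	case BACKING_FILE:
		return false;	/* could race with writeback: use slow GUP */
	}
	return false;
}

int main(void)
{
	struct folio_model hugetlb = { BACKING_HUGETLBFS };
	struct folio_model file = { BACKING_FILE };

	printf("hugetlbfs longterm pin allowed: %d\n",
	       longterm_fast_pin_allowed(&hugetlb, FOLL_WRITE | FOLL_LONGTERM));
	printf("file-backed longterm pin allowed: %d\n",
	       longterm_fast_pin_allowed(&file, FOLL_WRITE | FOLL_LONGTERM));
	return 0;
}
```

Since hugepd entries can currently only come from hugetlbfs, this check always takes the "allowed" path there, which is why the patch can drop it.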
Comments
On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
>
> Hugepd format for GUP is only used in PowerPC with hugetlbfs. There are
> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
> PPC_8XX), however those pages are not candidates for GUP.
>
> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
> file-backed mappings") added a check to fail gup-fast if there's potential
> risk of violating GUP over writeback file systems. That should never apply
> to hugepd. Considering that hugepd is an old format (and even
> software-only), there's no plan to extend hugepd into other file typed
> memories that is prone to the same issue.

I didn't dig into the ppc stuff too deeply, but this looks to me like
it is the same thing as ARM's contig bits?

ie a chunk of PMD/etc entries are all managed together as though they
are a virtual larger entry and we use the hugepte_addr_end() stuff to
iterate over each sub entry.

But WHY is GUP doing this or caring about this? GUP should have no
problem handling the super-size entry (eg 8M on nohash) as a single
thing. It seems we only lack an API to get this out of the arch code?

It seems to me we should see ARM and PPC agree on what the API is for
this and then get rid of hugepd by making both use the same page table
walker API. Is that too hopeful?

> Drop that check, not only because it'll never be true for hugepd per any
> known plan, but also it paves way for reusing the function outside
> fast-gup.

I didn't see any other caller of this function in this series? When
does this re-use happen??

Jason
On 15/01/2024 19:37, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
>> From: Peter Xu <peterx@redhat.com>
>>
>> Hugepd format for GUP is only used in PowerPC with hugetlbfs. There are
>> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
>> PPC_8XX), however those pages are not candidates for GUP.
>>
>> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
>> file-backed mappings") added a check to fail gup-fast if there's potential
>> risk of violating GUP over writeback file systems. That should never apply
>> to hugepd. Considering that hugepd is an old format (and even
>> software-only), there's no plan to extend hugepd into other file typed
>> memories that is prone to the same issue.
>
> I didn't dig into the ppc stuff too deeply, but this looks to me like
> it is the same thing as ARM's contig bits?
>
> ie a chunk of PMD/etc entries are all managed together as though they
> are a virtual larger entry and we use the hugepte_addr_end() stuff to
> iterate over each sub entry.

As far as I understand ARM's contig stuff, hugepd on powerpc is
something different.

hugepd is a page directory dedicated to huge pages, where you have huge
pages listed instead of regular pages. For instance, on powerpc 32 with
each PGD entry covering 4 Mbytes, a regular page table has 1024 PTEs. A
hugepd for 512k is a page table with 8 entries.

And for 8 Mbyte entries, the hugepd is a page table with only one entry,
and 2 consecutive PGD entries will point to the same hugepd to cover the
entire 8 Mbytes.

> But WHY is GUP doing this or caring about this? GUP should have no
> problem handling the super-size entry (eg 8M on nohash) as a single
> thing. It seems we only lack an API to get this out of the arch code?
>
> It seems to me we should see ARM and PPC agree on what the API is for
> this and then get rid of hugepd by making both use the same page table
> walker API. Is that too hopeful?

Can't see the similarity between ARM contig PTE and PPC huge page
directories.

>> Drop that check, not only because it'll never be true for hugepd per any
>> known plan, but also it paves way for reusing the function outside
>> fast-gup.
>
> I didn't see any other caller of this function in this series? When
> does this re-use happen??
>
> Jason

Christophe
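To make the numbers above concrete, here is a small stand-alone arithmetic sketch of the ppc32 8xx-style layout being described; the constants and names are illustrative, not taken from the kernel. A PGD entry spans 4 MB, so a regular last-level table holds 4 MB / 4 KB = 1024 PTEs, a 512 KB hugepd holds 4 MB / 512 KB = 8 entries, and an 8 MB page is wider than one PGD entry, so a single-entry hugepd is shared by two consecutive PGD entries.

```c
#include <stdio.h>

/* Illustrative constants for a ppc32 8xx-style layout; not kernel code. */
#define PGD_SPAN	(4UL << 20)	/* each PGD entry covers 4 MB */
#define PAGE_SIZE_4K	(4UL << 10)
#define PAGE_SIZE_512K	(512UL << 10)
#define PAGE_SIZE_8M	(8UL << 20)

int main(void)
{
	/* A regular page table lists base pages under one PGD entry. */
	printf("regular PTEs per PGD entry : %lu\n", PGD_SPAN / PAGE_SIZE_4K);

	/* A hugepd lists huge pages instead, so it has far fewer entries. */
	printf("512K hugepd entries        : %lu\n", PGD_SPAN / PAGE_SIZE_512K);

	/*
	 * An 8 MB page is bigger than one PGD entry's span, so the hugepd
	 * has a single entry and two consecutive PGD entries point to it.
	 */
	printf("8M hugepd entries          : %lu\n",
	       PAGE_SIZE_8M >= PGD_SPAN ? 1UL : PGD_SPAN / PAGE_SIZE_8M);
	printf("PGD entries sharing it     : %lu\n", PAGE_SIZE_8M / PGD_SPAN);
	return 0;
}
```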
On Tue, Jan 16, 2024 at 06:30:39AM +0000, Christophe Leroy wrote:
> On 15/01/2024 19:37, Jason Gunthorpe wrote:
> > On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
> >> From: Peter Xu <peterx@redhat.com>
> >>
> >> Hugepd format for GUP is only used in PowerPC with hugetlbfs. There are
> >> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
> >> PPC_8XX), however those pages are not candidates for GUP.
> >>
> >> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
> >> file-backed mappings") added a check to fail gup-fast if there's potential
> >> risk of violating GUP over writeback file systems. That should never apply
> >> to hugepd. Considering that hugepd is an old format (and even
> >> software-only), there's no plan to extend hugepd into other file typed
> >> memories that is prone to the same issue.
> >
> > I didn't dig into the ppc stuff too deeply, but this looks to me like
> > it is the same thing as ARM's contig bits?
> >
> > ie a chunk of PMD/etc entries are all managed together as though they
> > are a virtual larger entry and we use the hugepte_addr_end() stuff to
> > iterate over each sub entry.
>
> As far as I understand ARM's contig stuff, hugepd on powerpc is
> something different.
>
> hugepd is a page directory dedicated to huge pages, where you have huge
> pages listed instead of regular pages. For instance, on powerpc 32 with
> each PGD entry covering 4 Mbytes, a regular page table has 1024 PTEs. A
> hugepd for 512k is a page table with 8 entries.
>
> And for 8 Mbyte entries, the hugepd is a page table with only one entry,
> and 2 consecutive PGD entries will point to the same hugepd to cover the
> entire 8 Mbytes.

That still sounds a lot like the ARM thing - except ARM replicates the
entry. You also said PPC replicates the entry like ARM to get to the
8M?

I guess the difference is in how the table memory is laid out? ARM
marks the size in the same entry that has the physical address, so the
entries are self describing and then replicated. It kind of sounds
like PPC is marking the size in the prior level and then reconfiguring
the layout of the lower level? Otherwise it surely must do the same
replication to make a radix index work..

If yes, I guess that is the main problem: the mm APIs don't have a way
today to convey data from the pgd level to understand how to parse the
pmd level?

> > It seems to me we should see ARM and PPC agree on what the API is for
> > this and then get rid of hugepd by making both use the same page table
> > walker API. Is that too hopeful?
>
> Can't see the similarity between ARM contig PTE and PPC huge page
> directories.

Well, they are both variable sized entries.

So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
that would return enough information for both.

Jason
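As a rough illustration of the kind of walker API being gestured at here, the sketch below models a last-level table whose entries can be variable-sized leaves, and a generic walk that only asks two questions of the "arch" layer: is this entry a leaf, and how much address space does it cover. The structures and helper names are invented for the example; they are loosely inspired by the existing pmd_leaf()/p*d_leaf_size() style of helpers but are not the real kernel API.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy last-level table; names and layout are invented for illustration. */
#define TOY_PAGE_SHIFT	12
#define TOY_PTES	1024			/* PTEs per last-level table */
#define TOY_PMD_SPAN	((uint64_t)TOY_PTES << TOY_PAGE_SHIFT)

struct toy_pte {
	uint64_t phys;			/* physical address of the leaf */
	uint64_t size;			/* bytes covered; 0 means "not present" */
};

struct toy_pmd {
	struct toy_pte *table;		/* points at a last-level table */
};

/* The two arch hooks the discussion is about: "is it a leaf, and how big?" */
static int toy_pte_leaf(const struct toy_pte *pte)		{ return pte->size != 0; }
static uint64_t toy_pte_leaf_size(const struct toy_pte *pte)	{ return pte->size; }

/*
 * A generic walk over [addr, end): it never needs to know about hugepd or
 * contig bits, it just advances by whatever size the leaf reports.
 */
static void toy_walk(const struct toy_pmd *pmd, uint64_t addr, uint64_t end)
{
	while (addr < end) {
		const struct toy_pte *pte =
			&pmd->table[(addr % TOY_PMD_SPAN) >> TOY_PAGE_SHIFT];

		if (toy_pte_leaf(pte))
			printf("leaf at %#llx, size %#llx\n",
			       (unsigned long long)addr,
			       (unsigned long long)toy_pte_leaf_size(pte));

		/* Skip the whole leaf instead of every 4k slot under it. */
		addr += toy_pte_leaf(pte) ? toy_pte_leaf_size(pte)
					  : (1ULL << TOY_PAGE_SHIFT);
	}
}

int main(void)
{
	static struct toy_pte table[TOY_PTES];
	struct toy_pmd pmd = { table };

	table[0] = (struct toy_pte){ .phys = 0x100000, .size = 512 << 10 };
	toy_walk(&pmd, 0, 1 << 20);	/* walk the first 1 MB */
	return 0;
}
```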
On 16/01/2024 13:31, Jason Gunthorpe wrote:
> On Tue, Jan 16, 2024 at 06:30:39AM +0000, Christophe Leroy wrote:
>>
>> On 15/01/2024 19:37, Jason Gunthorpe wrote:
>>> On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
>>>> From: Peter Xu <peterx@redhat.com>
>>>>
>>>> Hugepd format for GUP is only used in PowerPC with hugetlbfs. There are
>>>> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
>>>> PPC_8XX), however those pages are not candidates for GUP.
>>>>
>>>> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
>>>> file-backed mappings") added a check to fail gup-fast if there's potential
>>>> risk of violating GUP over writeback file systems. That should never apply
>>>> to hugepd. Considering that hugepd is an old format (and even
>>>> software-only), there's no plan to extend hugepd into other file typed
>>>> memories that is prone to the same issue.
>>>
>>> I didn't dig into the ppc stuff too deeply, but this looks to me like
>>> it is the same thing as ARM's contig bits?
>>>
>>> ie a chunk of PMD/etc entries are all managed together as though they
>>> are a virtual larger entry and we use the hugepte_addr_end() stuff to
>>> iterate over each sub entry.
>>
>> As far as I understand ARM's contig stuff, hugepd on powerpc is
>> something different.
>>
>> hugepd is a page directory dedicated to huge pages, where you have huge
>> pages listed instead of regular pages. For instance, on powerpc 32 with
>> each PGD entry covering 4 Mbytes, a regular page table has 1024 PTEs. A
>> hugepd for 512k is a page table with 8 entries.
>>
>> And for 8 Mbyte entries, the hugepd is a page table with only one entry,
>> and 2 consecutive PGD entries will point to the same hugepd to cover the
>> entire 8 Mbytes.
>
> That still sounds a lot like the ARM thing - except ARM replicates the
> entry. You also said PPC replicates the entry like ARM to get to the
> 8M?

Is it like ARM? Not sure. The PTE is not in the PGD; it must be in a L2
directory, even for 8M. You can see in the attached picture what the
hardware expects.

> I guess the difference is in how the table memory is laid out? ARM
> marks the size in the same entry that has the physical address, so the
> entries are self describing and then replicated. It kind of sounds
> like PPC is marking the size in the prior level and then reconfiguring
> the layout of the lower level? Otherwise it surely must do the same
> replication to make a radix index work..

Yes, that's how it works on powerpc. For 8xx we used to do that for both
8M and 512k pages. Now for 512k pages we do kind of like ARM (which
means replicating the entry 128 times), as that's needed to allow mixing
different page sizes for a given PGD entry.

But for 8M pages that would mean replicating the entry 2048 times.
That's a bit too much, isn't it?

> If yes, I guess that is the main problem: the mm APIs don't have a way
> today to convey data from the pgd level to understand how to parse the
> pmd level?
>
>>> It seems to me we should see ARM and PPC agree on what the API is for
>>> this and then get rid of hugepd by making both use the same page table
>>> walker API. Is that too hopeful?
>>
>> Can't see the similarity between ARM contig PTE and PPC huge page
>> directories.
>
> Well, they are both variable sized entries.
>
> So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
> that would return enough information for both.

pmd_leaf()? Unless I'm missing something I can't do leaf at PMD (PGD)
level. It must be a two-level process even for pages bigger than a PMD
entry.

Christophe
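The replication counts quoted above follow directly from the geometry: with 4 KB slots in the last-level table, a 512 KB mapping spans 512 KB / 4 KB = 128 slots, while an 8 MB mapping would span 2048 slots. The sketch below works that out, with an assumed 4-byte entry size (illustrative, not taken from the kernel) to show the page-table memory cost that makes the single-entry hugepd attractive for 8M.

```c
#include <stdio.h>

#define SLOT_SIZE	(4UL << 10)	/* each last-level slot maps 4 KB */
#define ENTRY_BYTES	4UL		/* assumed size of one 32-bit PTE */

static void replication_cost(const char *name, unsigned long map_size)
{
	unsigned long copies = map_size / SLOT_SIZE;

	/* ARM-contig-style replication: one identical entry per 4 KB slot. */
	printf("%-4s page: %4lu replicated entries (%lu bytes of PTEs)\n",
	       name, copies, copies * ENTRY_BYTES);
}

int main(void)
{
	replication_cost("512K", 512UL << 10);	/* 128 copies */
	replication_cost("8M", 8UL << 20);	/* 2048 copies, ~8 KB of PTEs */
	return 0;
}
```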
On Tue, Jan 16, 2024 at 06:32:32PM +0000, Christophe Leroy wrote:
> >> hugepd is a page directory dedicated to huge pages, where you have huge
> >> pages listed instead of regular pages. For instance, on powerpc 32 with
> >> each PGD entry covering 4 Mbytes, a regular page table has 1024 PTEs. A
> >> hugepd for 512k is a page table with 8 entries.
> >>
> >> And for 8 Mbyte entries, the hugepd is a page table with only one entry,
> >> and 2 consecutive PGD entries will point to the same hugepd to cover the
> >> entire 8 Mbytes.
> >
> > That still sounds a lot like the ARM thing - except ARM replicates the
> > entry. You also said PPC replicates the entry like ARM to get to the
> > 8M?
>
> Is it like ARM? Not sure. The PTE is not in the PGD; it must be in a L2
> directory, even for 8M.

Your diagram looks almost exactly like ARM to me.

The key thing is that the address for the L2 Table is *always* formed as:

  L2 Table Base << 12 + L2 Index << 2 + 00

Then the L2 Descriptor must contain bits indicating the page
size. The L2 Descriptor is replicated to every 4k entry that the page
size covers.

The only difference I see is the 8M case, which has a page size greater
than a single L1 entry.

> Yes, that's how it works on powerpc. For 8xx we used to do that for both
> 8M and 512k pages. Now for 512k pages we do kind of like ARM (which
> means replicating the entry 128 times), as that's needed to allow mixing
> different page sizes for a given PGD entry.

Right, you want to have granular page sizes or it becomes unusable in
the general case.

> But for 8M pages that would mean replicating the entry 2048 times.
> That's a bit too much, isn't it?

Indeed, de-duplicating the L2 Table is a neat optimization.

> > So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
> > that would return enough information for both.
>
> pmd_leaf()? Unless I'm missing something I can't do leaf at PMD (PGD)
> level. It must be a two-level process even for pages bigger than a PMD
> entry.

Right, this is the normal THP/hugetlb situation on x86/etc. It
wouldn't apply here since it seems the HW doesn't have a bit in the L1
descriptor to indicate leaf.

Instead for PPC this hugepd stuff should start to follow Ryan's
generic work for ARM contig:

https://lore.kernel.org/all/20231218105100.172635-1-ryan.roberts@arm.com/

Specifically the arch implementation:

https://lore.kernel.org/linux-mm/20231218105100.172635-15-ryan.roberts@arm.com/

Ie the arch should ultimately wire up the replication and variable
page size bits within its implementation of set_ptes(). set_ptes()
gets a contiguous run of addresses and should install it with maximum
use of the variable page sizes. The core code will start to call
set_ptes() in more cases as Ryan gets along with his project.

For the purposes of GUP, where we are today and where we are going,
it would be much better to not have a special PPC specific "hugepd"
parser. Just process each of the 4k replicates one by one like ARM is
starting with.

The arch would still have to return the correct page address from
pte_phys(), which I think Ryan is doing by having the replicates encode
the full 4k based address in each entry. The HW will ignore those low
bits and pte_phys() then works properly. This would work for PPC as
well, excluding the 8M optimization.

Going forward I'd expect to see some pte_page_size() that returns the
size bits, and GUP can have logic to skip reading replicates.

The advantage of all this is that it stops making the feature special,
and the work Ryan is doing to generically push larger folios into
set_ptes will become usable on these PPC platforms as well. And we can
kill the PPC specific hugepd.

Jason
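The "skip reading replicates" idea can be sketched as follows: if every replicated entry carries the exact physical address of its own 4k slot plus a shared size hint, a GUP-style loop can account for the whole leaf from the first entry it reads and then jump past the remaining copies. Everything below is an invented user-space model under those assumptions (pte_page_size() is the hypothetical name from the email, the rest is made up for the example, and the walk is assumed to start on a leaf boundary); it is not kernel code.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Invented replicated-entry model: each 4k slot has its own address. */
struct toy_pte {
	uint64_t phys;		/* exact 4k-based physical address */
	uint64_t page_size;	/* size hint shared by all replicates */
};

/* Models the hypothetical pte_page_size() from the discussion. */
static uint64_t toy_pte_page_size(const struct toy_pte *pte)
{
	return pte->page_size;
}

/* Pin (here: just count) pages in [addr, end) while skipping replicates. */
static unsigned long toy_gup(const struct toy_pte *ptep, uint64_t addr,
			     uint64_t end)
{
	unsigned long pinned = 0;

	while (addr < end) {
		uint64_t sz = toy_pte_page_size(ptep);
		/* The whole leaf is known from this one entry. */
		unsigned long refs = sz >> PAGE_SHIFT;

		printf("pin %lu pages at phys %#llx\n", refs,
		       (unsigned long long)ptep->phys);
		pinned += refs;

		/* Jump over the remaining identical replicates. */
		ptep += refs;
		addr += sz;
	}
	return pinned;
}

int main(void)
{
	/* Build a 512 KB leaf: 128 replicates, each with its own 4k address. */
	static struct toy_pte ptes[128];
	for (int i = 0; i < 128; i++)
		ptes[i] = (struct toy_pte){ .phys = 0x200000 + i * PAGE_SIZE,
					    .page_size = 512 << 10 };

	printf("total pinned: %lu\n", toy_gup(ptes, 0, 512 << 10));
	return 0;
}
```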
On 17/01/2024 13:22, Jason Gunthorpe wrote:
> On Tue, Jan 16, 2024 at 06:32:32PM +0000, Christophe Leroy wrote:
>>>> hugepd is a page directory dedicated to huge pages, where you have huge
>>>> pages listed instead of regular pages. For instance, on powerpc 32 with
>>>> each PGD entry covering 4 Mbytes, a regular page table has 1024 PTEs. A
>>>> hugepd for 512k is a page table with 8 entries.
>>>>
>>>> And for 8 Mbyte entries, the hugepd is a page table with only one entry,
>>>> and 2 consecutive PGD entries will point to the same hugepd to cover the
>>>> entire 8 Mbytes.
>>>
>>> That still sounds a lot like the ARM thing - except ARM replicates the
>>> entry. You also said PPC replicates the entry like ARM to get to the
>>> 8M?
>>
>> Is it like ARM? Not sure. The PTE is not in the PGD; it must be in a L2
>> directory, even for 8M.
>
> Your diagram looks almost exactly like ARM to me.
>
> The key thing is that the address for the L2 Table is *always* formed as:
>
>   L2 Table Base << 12 + L2 Index << 2 + 00
>
> Then the L2 Descriptor must contain bits indicating the page
> size. The L2 Descriptor is replicated to every 4k entry that the page
> size covers.
>
> The only difference I see is the 8M case, which has a page size greater
> than a single L1 entry.
>
>> Yes, that's how it works on powerpc. For 8xx we used to do that for both
>> 8M and 512k pages. Now for 512k pages we do kind of like ARM (which
>> means replicating the entry 128 times), as that's needed to allow mixing
>> different page sizes for a given PGD entry.
>
> Right, you want to have granular page sizes or it becomes unusable in
> the general case.
>
>> But for 8M pages that would mean replicating the entry 2048 times.
>> That's a bit too much, isn't it?
>
> Indeed, de-duplicating the L2 Table is a neat optimization.
>
>>> So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
>>> that would return enough information for both.
>>
>> pmd_leaf()? Unless I'm missing something I can't do leaf at PMD (PGD)
>> level. It must be a two-level process even for pages bigger than a PMD
>> entry.
>
> Right, this is the normal THP/hugetlb situation on x86/etc. It
> wouldn't apply here since it seems the HW doesn't have a bit in the L1
> descriptor to indicate leaf.
>
> Instead for PPC this hugepd stuff should start to follow Ryan's
> generic work for ARM contig:
>
> https://lore.kernel.org/all/20231218105100.172635-1-ryan.roberts@arm.com/
>
> Specifically the arch implementation:
>
> https://lore.kernel.org/linux-mm/20231218105100.172635-15-ryan.roberts@arm.com/
>
> Ie the arch should ultimately wire up the replication and variable
> page size bits within its implementation of set_ptes(). set_ptes()
> gets a contiguous run of addresses and should install it with maximum
> use of the variable page sizes. The core code will start to call
> set_ptes() in more cases as Ryan gets along with his project.

Note that it's not just set_ptes() that you want to batch; there are
other calls that can benefit too. See patches 2 and 3 in the series you
linked. (Although I'm working with DavidH on this and the details are
going to change a little.)

> For the purposes of GUP, where we are today and where we are going,
> it would be much better to not have a special PPC specific "hugepd"
> parser. Just process each of the 4k replicates one by one like ARM is
> starting with.
>
> The arch would still have to return the correct page address from
> pte_phys(), which I think Ryan is doing by having the replicates encode
> the full 4k based address in each entry.

Yes; although it's actually also a requirement of the arm architecture.
Since the contig bit is just a hint that the HW may or may not take any
notice of, the page tables have to be correct for the case where the HW
just reads them in base pages. Fixing up the bottom bits should be
trivial using the PTE pointer, if needed for ppc.

> The HW will ignore those low
> bits and pte_phys() then works properly. This would work for PPC as
> well, excluding the 8M optimization.
>
> Going forward I'd expect to see some pte_page_size() that returns the
> size bits, and GUP can have logic to skip reading replicates.

Yes; pte_batch_remaining() in patch 2 is an attempt at this. But as I
said, the details will likely change a little.

> The advantage of all this is that it stops making the feature special,
> and the work Ryan is doing to generically push larger folios into
> set_ptes will become usable on these PPC platforms as well. And we can
> kill the PPC specific hugepd.
>
> Jason
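Connecting the two points above: a contig-style set_ptes() can install every replicate with its own 4 KB-based physical address (so per-base-page reads stay correct even if the HW ignores the hint) while all replicates share one size hint. The following is an invented user-space model under those assumptions; it is neither the kernel's set_ptes() nor the patches linked above.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Invented entry layout: a full per-4k address plus a shared size hint. */
struct toy_pte {
	uint64_t phys;
	uint64_t contig_size;	/* 0 means "plain 4k entry" */
};

/*
 * Model of a set_ptes()-style helper: install 'nr' consecutive 4k entries
 * and tag them all with one size hint describing the whole run.
 */
static void toy_set_ptes(struct toy_pte *ptep, uint64_t phys,
			 unsigned int nr, uint64_t contig_size)
{
	for (unsigned int i = 0; i < nr; i++) {
		/* Each replicate keeps the exact address of its own slot. */
		ptep[i].phys = phys + (uint64_t)i * PAGE_SIZE;
		ptep[i].contig_size = contig_size;
	}
}

int main(void)
{
	static struct toy_pte ptes[128];

	/* Install a 512 KB run as 128 replicates sharing one size hint. */
	toy_set_ptes(ptes, 0x40000000, 128, 512 << 10);

	/* A per-base-page reader still gets the right address for slot 5. */
	printf("slot 5 phys = %#llx (hint %#llx)\n",
	       (unsigned long long)ptes[5].phys,
	       (unsigned long long)ptes[5].contig_size);
	return 0;
}
```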
On Mon, Jan 15, 2024 at 02:37:48PM -0400, Jason Gunthorpe wrote:
> > Drop that check, not only because it'll never be true for hugepd per any
> > known plan, but also it paves way for reusing the function outside
> > fast-gup.
>
> I didn't see any other caller of this function in this series? When
> does this re-use happen??

It's reused in patch 12 ("mm/gup: Handle hugepd for follow_page()").

Thanks,
```diff
diff --git a/mm/gup.c b/mm/gup.c
index eebae70d2465..fa93e14b7fca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}
 
-	if (!folio_fast_pin_allowed(folio, flags)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	return 1;
 }
 
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
```