Message ID: 20240122194200.381241-2-david@redhat.com
State: New
Subject: [PATCH v1 01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, Ryan Roberts <ryan.roberts@arm.com>, Russell King <linux@armlinux.org.uk>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Dinh Nguyen <dinguyen@kernel.org>, Michael Ellerman <mpe@ellerman.id.au>, Nicholas Piggin <npiggin@gmail.com>, Christophe Leroy <christophe.leroy@csgroup.eu>, "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>, "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>, Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, Alexander Gordeev <agordeev@linux.ibm.com>, Gerald Schaefer <gerald.schaefer@linux.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, Sven Schnelle <svens@linux.ibm.com>, "David S. Miller" <davem@davemloft.net>, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Date: Mon, 22 Jan 2024 20:41:50 +0100
In-Reply-To: <20240122194200.381241-1-david@redhat.com>
Series: mm/memory: optimize fork() with PTE-mapped THP
Commit Message
David Hildenbrand
Jan. 22, 2024, 7:41 p.m. UTC
We want to make use of pte_next_pfn() outside of set_ptes(). Let's
simply define PFN_PTE_SHIFT, required by pte_next_pfn().
Signed-off-by: David Hildenbrand <david@redhat.com>
---
arch/arm/include/asm/pgtable.h | 2 ++
arch/arm64/include/asm/pgtable.h | 2 ++
2 files changed, 4 insertions(+)
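For context: pte_next_pfn() has a generic fallback in include/linux/pgtable.h that an architecture opts into by defining PFN_PTE_SHIFT. It looks roughly like the sketch below and assumes the PFN occupies a contiguous bit range starting at that shift. That contiguity assumption is exactly what the review below ends up probing.

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/*
	 * Advance to the PTE for the next PFN. Only correct when the PFN
	 * is stored in contiguous PTE bits starting at PFN_PTE_SHIFT.
	 */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif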
Comments
On 23.01.24 11:34, Ryan Roberts wrote:
> On 22/01/2024 19:41, David Hildenbrand wrote:
>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  arch/arm/include/asm/pgtable.h   | 2 ++
>>  arch/arm64/include/asm/pgtable.h | 2 ++
>>  2 files changed, 4 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>> index d657b84b6bf70..be91e376df79e 100644
>> --- a/arch/arm/include/asm/pgtable.h
>> +++ b/arch/arm/include/asm/pgtable.h
>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>  extern void __sync_icache_dcache(pte_t pteval);
>>  #endif
>>
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>> +
>>  void set_ptes(struct mm_struct *mm, unsigned long addr,
>>  		      pte_t *ptep, pte_t pteval, unsigned int nr);
>>  #define set_ptes set_ptes
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 79ce70fbb751c..d4b3bd96e3304 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>  		mte_sync_tags(pte, nr_pages);
>>  }
>>
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>
> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
> works fine for 48-bit output addresses, but for 52-bit OAs, the high bits are
> not kept contiguously, so if you happen to be setting a mapping for which the
> physical memory block straddles bit 48, this won't work.

Right, as soon as the PTE bits are not contiguous, this stops working,
just like set_ptes() would, which I used as orientation.

> Today, only the 64K base page config can support 52 bits, and for this,
> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
> Fortunately we already have helpers in arm64 to abstract this.
>
> So I think arm64 will want to define its own pte_next_pfn():
>
> #define pte_next_pfn pte_next_pfn
> static inline pte_t pte_next_pfn(pte_t pte)
> {
> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
> }
>
> I'll do a separate patch to fix the already broken arm64 set_ptes()
> implementation.

Makes sense.

> I'm not sure if this type of problem might also apply to other arches?

I saw similar handling in the PPC implementation of set_ptes(), but was
not able to convince myself that it is actually required there.

pte_pfn() on ppc does:

static inline unsigned long pte_pfn(pte_t pte)
{
	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
}

But that means that the PFNs *are* contiguous. If high bits are used for
something else, then we might produce a garbage PTE on overflow, but that
shouldn't really matter, I concluded: for folio_pte_batch() purposes, we'd
not detect "belongs to this folio batch" either way.

Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just
hope that we don't lose any other arbitrary PTE bits by doing the
pte_pgprot().

I guess pte_pfn() implementations should tell us if anything special
needs to happen.
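[Editorial note: to make the failure mode concrete, here is a small stand-alone illustration in user-space C, not kernel code. The constants mirror the 64K-page 52-bit layout Ryan describes (OA[51:48] in PTE[15:12], matching the arm64 helpers quoted later in this thread); everything else is hypothetical scaffolding. A plain "pte_val + one page" cannot carry from PTE bit 47 into PTE bit 12, so a PFN increment that crosses bit 48 produces a garbage PTE, while round-tripping through the physical address works.]

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		16			/* 64K pages */
#define PTE_ADDR_LOW		(((1ULL << 32) - 1) << PAGE_SHIFT) /* PTE[47:16] */
#define PTE_ADDR_HIGH		(0xfULL << 12)		/* PTE[15:12] = OA[51:48] */
#define PTE_ADDR_HIGH_SHIFT	36

/* Split encoding: low OA bits stay in place, OA[51:48] move to PTE[15:12]. */
static uint64_t phys_to_pte(uint64_t phys)
{
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) &
	       (PTE_ADDR_LOW | PTE_ADDR_HIGH);
}

int main(void)
{
	/* Last page below bit 48: the next page straddles the split. */
	uint64_t phys = (1ULL << 48) - (1ULL << PAGE_SHIFT);
	uint64_t pte = phys_to_pte(phys);

	/* Correct: advance the phys address, then re-encode: 0x1000,
	 * i.e. OA bit 48 lands in PTE bit 12. */
	printf("via phys: %#llx\n",
	       (unsigned long long)phys_to_pte(phys + (1ULL << PAGE_SHIFT)));

	/* Naive: the carry runs off the top of PTE_ADDR_LOW into PTE
	 * bit 48, which is not an address bit: 0x1000000000000. */
	printf("naive:    %#llx\n",
	       (unsigned long long)(pte + (1ULL << PAGE_SHIFT)));
	return 0;
}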
On 23.01.24 11:48, David Hildenbrand wrote:
> On 23.01.24 11:34, Ryan Roberts wrote:
>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>>> [...]
>>
>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>> works fine for 48-bit output addresses, but for 52-bit OAs, the high bits are
>> not kept contiguously, so if you happen to be setting a mapping for which the
>> physical memory block straddles bit 48, this won't work.
>
> Right, as soon as the PTE bits are not contiguous, this stops working,
> just like set_ptes() would, which I used as orientation.
>
>> Today, only the 64K base page config can support 52 bits, and for this,
>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>> Fortunately we already have helpers in arm64 to abstract this.
>>
>> So I think arm64 will want to define its own pte_next_pfn():
>>
>> #define pte_next_pfn pte_next_pfn
>> static inline pte_t pte_next_pfn(pte_t pte)
>> {
>> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>> }

Digging into the details, on arm64 we have:

#define pte_pfn(pte)		(__pte_to_phys(pte) >> PAGE_SHIFT)

and

#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)

But that implies that, upstream, the PFN is always contiguous, no?
On 23/01/2024 at 11:48, David Hildenbrand wrote:
> On 23.01.24 11:34, Ryan Roberts wrote:
>> [...]
>>
>> I'm not sure if this type of problem might also apply to other arches?
>
> I saw similar handling in the PPC implementation of set_ptes(), but was
> not able to convince myself that it is actually required there.
>
> pte_pfn() on ppc does:
>
> static inline unsigned long pte_pfn(pte_t pte)
> {
> 	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
> }
>
> But that means that the PFNs *are* contiguous. If high bits are used for
> something else, then we might produce a garbage PTE on overflow, but that
> shouldn't really matter, I concluded: for folio_pte_batch() purposes, we'd
> not detect "belongs to this folio batch" either way.

Yes, PFNs are contiguous. The only thing is that the PFN is not located at
PAGE_SHIFT, see
https://elixir.bootlin.com/linux/v6.3-rc2/source/arch/powerpc/include/asm/nohash/pte-e500.h#L63

On powerpc e500 we have 24 PTE flags and the RPN starts above that. The
mask is then standard:

#define PTE_RPN_MASK	(~((1ULL << PTE_RPN_SHIFT) - 1))

Christophe

> Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just
> hope that we don't lose any other arbitrary PTE bits by doing the
> pte_pgprot().
>
> I guess pte_pfn() implementations should tell us if anything special
> needs to happen.
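[Editorial note: if the generic helper were to be used on such a config, the wiring would presumably be just the following. This is a hypothetical sketch, not a patch from this thread, and the actual series may handle powerpc differently.]

/*
 * Hypothetical sketch: the e500 RPN is a contiguous field starting at
 * PTE_RPN_SHIFT rather than PAGE_SHIFT, so the generic pte_next_pfn()
 * still works if PFN_PTE_SHIFT names that offset: incrementing the PFN
 * is then a plain addition at the field's position.
 */
#define PFN_PTE_SHIFT	PTE_RPN_SHIFT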
On 23/01/2024 at 12:08, Ryan Roberts wrote:
> On 23/01/2024 10:48, David Hildenbrand wrote:
>> On 23.01.24 11:34, Ryan Roberts wrote:
>>> [...]
>>
>> I saw similar handling in the PPC implementation of set_ptes(), but was
>> not able to convince myself that it is actually required there.
>>
>> pte_pfn() on ppc does:
>>
>> static inline unsigned long pte_pfn(pte_t pte)
>> {
>> 	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
>> }
>>
>> But that means that the PFNs *are* contiguous.
>
> all the ppc pfn_pte() implementations also only shift the pfn, so I think ppc
> is safe to just define PFN_PTE_SHIFT. Although 2 of the 3 implementations
> shift by PTE_RPN_SHIFT and the other shifts by PAGE_SHIFT, so you might want
> to define PFN_PTE_SHIFT separately for all 3 configs?

We have PTE_RPN_SHIFT defined for all 4 implementations; for some of them
you are right that it is defined as PAGE_SHIFT, but I see no reason to
define PFN_PTE_SHIFT separately.

>> If high bits are used for
>> something else, then we might produce a garbage PTE on overflow, but that
>> shouldn't really matter, I concluded: for folio_pte_batch() purposes, we'd
>> not detect "belongs to this folio batch" either way.
>
> Exactly.
>
>> Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just
>> hope that we don't lose any other arbitrary PTE bits by doing the
>> pte_pgprot().
>
> I don't see the need for ppc to implement pte_next_pfn().

Agreed.

> pte_pgprot() is not a "proper" arch interface (it's only required by the
> core-mm if the arch implements a certain Kconfig IIRC). For arm64, all bits
> that are not pfn are pgprot, so there are no bits lost.
>
>> I guess pte_pfn() implementations should tell us if anything special
>> needs to happen.
>>> If high bits are used for
>>> something else, then we might produce a garbage PTE on overflow, but that
>>> shouldn't really matter, I concluded: for folio_pte_batch() purposes, we'd
>>> not detect "belongs to this folio batch" either way.
>>
>> Exactly.
>>
>>> Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just
>>> hope that we don't lose any other arbitrary PTE bits by doing the
>>> pte_pgprot().
>>
>> I don't see the need for ppc to implement pte_next_pfn().
>
> Agreed.

So likely we should then do on top for powerpc (whitespace damage):

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index a04ae4449a025..549a440ed7f65 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 			break;
 		ptep++;
 		addr += PAGE_SIZE;
-		/*
-		 * increment the pfn.
-		 */
-		pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
+		pte = pte_next_pfn(pte);
 	}
 }
On 23.01.24 12:17, Ryan Roberts wrote:
> On 23/01/2024 11:02, David Hildenbrand wrote:
>> On 23.01.24 11:48, David Hildenbrand wrote:
>>> On 23.01.24 11:34, Ryan Roberts wrote:
>>>> [...]
>>>>
>>>> So I think arm64 will want to define its own pte_next_pfn():
>>>>
>>>> #define pte_next_pfn pte_next_pfn
>>>> static inline pte_t pte_next_pfn(pte_t pte)
>>>> {
>>>> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>>> }
>>
>> Digging into the details, on arm64 we have:
>>
>> #define pte_pfn(pte)		(__pte_to_phys(pte) >> PAGE_SHIFT)
>>
>> and
>>
>> #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
>>
>> But that implies that, upstream, the PFN is always contiguous, no?
>
> But __pte_to_phys() and __phys_to_pte_val() depend on a Kconfig. If PA bits
> is 52, the bits are not all contiguous:
>
> #ifdef CONFIG_ARM64_PA_BITS_52
> static inline phys_addr_t __pte_to_phys(pte_t pte)
> {
> 	return (pte_val(pte) & PTE_ADDR_LOW) |
> 		((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
> }
> static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
> {
> 	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
> }
> #else
> #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
> #define __phys_to_pte_val(phys)	(phys)
> #endif

Ah, how could I have missed that. Agreed, set_ptes() and this patch are
broken.

Do you want to send a patch to implement pte_next_pfn() on arm64, and then
use pte_next_pfn() in set_ptes()? Then I can drop this patch here
completely from this series.
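[Editorial note: a rough sketch of what that follow-up could look like, combining Ryan's proposed helper with a set_ptes() loop. This is hypothetical, not the actual follow-up patch; the real arm64 set_ptes() also performs page-table checks, safe-update checks, and cache/tag syncing, all omitted here.]

#define pte_next_pfn pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Round-trip through the PFN so split 52-bit OA bits are re-encoded. */
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}

static inline void set_ptes(struct mm_struct *mm,
			    unsigned long __always_unused addr,
			    pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		/* was: pte_val(pte) += PAGE_SIZE, which cannot carry
		 * across the split OA encoding. */
		pte = pte_next_pfn(pte);
	}
}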
On 23.01.24 12:38, Ryan Roberts wrote:
> On 23/01/2024 11:31, David Hildenbrand wrote:
>> [...]
>>
>> So likely we should then do on top for powerpc (whitespace damage):
>>
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index a04ae4449a025..549a440ed7f65 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>>  			break;
>>  		ptep++;
>>  		addr += PAGE_SIZE;
>> -		/*
>> -		 * increment the pfn.
>> -		 */
>> -		pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>> +		pte = pte_next_pfn(pte);
>>  	}
>>  }
>
> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
> increment to this more complex approach, but the log doesn't say why.

@Aneesh, was that change on purpose?
On 23/01/2024 at 12:38, Ryan Roberts wrote:
> On 23/01/2024 11:31, David Hildenbrand wrote:
>> [...]
>>
>> So likely we should then do on top for powerpc (whitespace damage):
>> [...]
>> -		pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>> +		pte = pte_next_pfn(pte);
>
> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
> increment to this more complex approach, but the log doesn't say why.

Right. There was a discussion about it without any conclusion:
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20231024143604.16749-1-aneesh.kumar@linux.ibm.com/

As far as I understand, the simple increment is better on ppc/32 but worse
on ppc/64.

Christophe
On 23.01.24 12:48, Christophe Leroy wrote:
> On 23/01/2024 at 12:38, Ryan Roberts wrote:
>> On 23/01/2024 11:31, David Hildenbrand wrote:
>>> [...]
>>
>> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
>> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
>> increment to this more complex approach, but the log doesn't say why.
>
> Right. There was a discussion about it without any conclusion:
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20231024143604.16749-1-aneesh.kumar@linux.ibm.com/
>
> As far as I understand, the simple increment is better on ppc/32 but worse
> on ppc/64.

Sounds like we're micro-optimizing for a specific compiler version's
output. Hurray.
David Hildenbrand <david@redhat.com> writes:

> On 23.01.24 12:38, Ryan Roberts wrote:
>> On 23/01/2024 11:31, David Hildenbrand wrote:
>>> [...]
>>
>> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
>> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
>> increment to this more complex approach, but the log doesn't say why.
>
> @Aneesh, was that change on purpose?

Because we had a bug with the patch that introduced the change, and that
line was confusing. The right thing would have been to add pte_next_pfn()
to make it clear. It was confusing because not all pte formats have the
pfn at the PAGE_SHIFT offset (even though we did use the correct
PTE_RPN_SHIFT in this specific case). To make it simpler I ended up
switching that line to pte_pfn(pte) + 1.

-aneesh
David Hildenbrand <david@redhat.com> writes:

>>> [...]
>>>
>>> I don't see the need for ppc to implement pte_next_pfn().
>>
>> Agreed.
>
> So likely we should then do on top for powerpc (whitespace damage):
>
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index a04ae4449a025..549a440ed7f65 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  			break;
>  		ptep++;
>  		addr += PAGE_SIZE;
> -		/*
> -		 * increment the pfn.
> -		 */
> -		pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
> +		pte = pte_next_pfn(pte);
>  	}
>  }

Agreed.

-aneesh
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index d657b84b6bf70..be91e376df79e 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
 extern void __sync_icache_dcache(pte_t pteval);
 #endif
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
+
 void set_ptes(struct mm_struct *mm, unsigned long addr,
 		      pte_t *ptep, pte_t pteval, unsigned int nr);
 #define set_ptes set_ptes
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751c..d4b3bd96e3304 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
 		mte_sync_tags(pte, nr_pages);
 }
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
+
 static inline void set_ptes(struct mm_struct *mm,
 			    unsigned long __always_unused addr,
 			    pte_t *ptep, pte_t pte, unsigned int nr)