Message ID: 20230315051444.3229621-28-willy@infradead.org
State: New
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v4 27/36] x86: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:35 +0000
Message-Id: <20230315051444.3229621-28-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
Series: New page table range API
Commit Message
Matthew Wilcox
March 15, 2023, 5:14 a.m. UTC
Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
arch/x86/include/asm/pgtable.h | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
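The per-page stride implied by PFN_PTE_SHIFT is what lets generic code build a run of consecutive PTEs from a single starting entry. Below is a minimal user-space sketch of that idea (the `pte_t`, `pfn_pte()` and `set_ptes()` here are simplified stand-ins, not the kernel's actual definitions; PAGE_SHIFT of 12 is an assumption for 4K pages):

```c
#include <assert.h>

/* Stub model of an x86-style PTE: the PFN lives at bit PAGE_SHIFT. */
#define PAGE_SHIFT    12
#define PFN_PTE_SHIFT PAGE_SHIFT   /* the constant this patch adds for x86 */

typedef struct { unsigned long val; } pte_t;

static pte_t pfn_pte(unsigned long pfn)
{
	return (pte_t){ pfn << PFN_PTE_SHIFT };
}

static unsigned long pte_pfn(pte_t pte)
{
	return pte.val >> PFN_PTE_SHIFT;
}

/*
 * Sketch of the range helper: write 'nr' consecutive PTEs, advancing
 * the frame number by one page per iteration simply by adding
 * 1 << PFN_PTE_SHIFT to the raw PTE value.  This is why an arch only
 * needs to supply PFN_PTE_SHIFT to get range support.
 */
static void set_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		*ptep = pte;
		if (--nr == 0)
			break;
		ptep++;
		pte.val += 1UL << PFN_PTE_SHIFT;
	}
}
```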
Comments
On Wed, Mar 15, 2023 at 05:14:35AM +0000, Matthew Wilcox (Oracle) wrote:
> Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: x86@kernel.org
> Cc: "H. Peter Anvin" <hpa@zytor.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
On Wed, Mar 15, 2023 at 05:14:35AM +0000, Matthew Wilcox (Oracle) wrote:
> Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().
[...]
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep, pte_t pte)
> -{
> -	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
> -	set_pte(ptep, pte);
> -}
> -

And remove set_pte_at() apparently.. whut?!?
On Wed, Mar 15, 2023 at 11:34:36AM +0100, Peter Zijlstra wrote:
> On Wed, Mar 15, 2023 at 05:14:35AM +0000, Matthew Wilcox (Oracle) wrote:
> > Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().
[...]
> > -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> > -			      pte_t *ptep, pte_t pte)
> > -{
> > -	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
> > -	set_pte(ptep, pte);
> > -}
> > -
>
> And remove set_pte_at() apparently.. whut?!?

It's now in include/linux/pgtable.h
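The reason the arch copy can go away is that the generic header can express the single-entry case in terms of the range call. A user-space sketch of that shape (stub types, assumed 4K pages; not the kernel's actual include/linux/pgtable.h code):

```c
#include <assert.h>

#define PFN_PTE_SHIFT 12   /* assumption: 4K pages */

typedef struct { unsigned long val; } pte_t;
struct mm_struct;          /* opaque here; unused by the stubs */

/* Arch hook: x86's set_pte() just stores the value. */
static void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;
}

/* Generic range helper: writes nr PTEs, bumping the PFN each step. */
static void set_ptes(struct mm_struct *mm, unsigned long addr,
		     pte_t *ptep, pte_t pte, unsigned int nr)
{
	(void)mm;
	(void)addr;
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		pte.val += 1UL << PFN_PTE_SHIFT;
	}
}

/* The generic definition that replaces the per-arch set_pte_at(). */
#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
```

So any caller of set_pte_at() keeps working; only the definition moved from each arch header into the shared one.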
On Wed, Mar 15, 2023 at 01:16:24PM +0200, Mike Rapoport wrote:
> On Wed, Mar 15, 2023 at 11:34:36AM +0100, Peter Zijlstra wrote:
> > On Wed, Mar 15, 2023 at 05:14:35AM +0000, Matthew Wilcox (Oracle) wrote:
> > > Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().
[...]
> > And remove set_pte_at() apparently.. whut?!?
>
> It's now in include/linux/pgtable.h

All I have is this one patch -- and the changelog doesn't mention this.
HTF am I supposed to know that?
On Wed, Mar 15, 2023 at 12:19:41PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 15, 2023 at 01:16:24PM +0200, Mike Rapoport wrote:
> > It's now in include/linux/pgtable.h
>
> All I have is this one patch -- and the changelog doesn't mention this.
> HTF am I supposed to know that?

You should be subscribed to linux-arch.

I literally can't cc all arch maintainers on every patch; many of the
mailing lists will reject the emails based on "too many recipients".
That's what linux-arch is _for_.
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 1031025730d0..b237878061c4 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -184,6 +184,8 @@ static inline int pte_special(pte_t pte)
 
 static inline u64 protnone_mask(u64 val);
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
+
 static inline unsigned long pte_pfn(pte_t pte)
 {
 	phys_addr_t pfn = pte_val(pte);
@@ -1019,13 +1021,6 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 	return res;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
-{
-	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
-	set_pte(ptep, pte);
-}
-
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
 {
@@ -1291,6 +1286,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
 }
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+}
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmd)
 {