From patchwork Tue Feb 28 21:37:04 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62622
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/34] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Tue, 28 Feb 2023 21:37:04 +0000
Message-Id: <20230228213738.272178-2-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
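
[Editor's sketch, not part of the patch: a hypothetical caller showing the
new calling convention.  One page_table_check_ptes_set() call covers all nr
entries.  The PAGE_SIZE step in the raw pte value matches the encoding the
arc patch later in this series uses; other architectures step differently
(alpha adds 1UL << 32, see patch 05/34).]

        /* Hypothetical wrapper, for illustration only. */
        static inline void example_set_ptes(struct mm_struct *mm,
                        unsigned long addr, pte_t *ptep, pte_t pte,
                        unsigned int nr)
        {
                unsigned int i;

                /* One call checks all nr PTE slots and their consecutive PFNs. */
                page_table_check_ptes_set(mm, addr, ptep, pte, nr);
                for (i = 0; i < nr; i++) {
                        set_pte(ptep + i, pte);
                        pte = __pte(pte_val(pte) + PAGE_SIZE);
                }
        }
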
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b6ba466e2e8a..69765dc697af 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -358,7 +358,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	return __set_pte_at(mm, addr, ptep, pte);
 }
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ab05f892d317..b516f3b59616 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -459,7 +459,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
 	__set_pte_at(mm, addr, ptep, pteval);
 }
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7425f32e5293..84be3e07b112 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1022,7 +1022,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	set_pte(ptep, pte);
 }
 
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 	__page_table_check_pud_clear(mm, addr, pud);
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }
 
 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }
 
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0042..e6f4d40caaa2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -184,20 +184,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
 
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pte_clear(mm, addr, *ptep);
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep[i]);
 	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
 				     pte_write(pte));
 	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);
 
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)

From patchwork Tue Feb 28 21:37:05 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62627
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/34] mm: Add generic flush_icache_pages() and documentation
Date: Tue, 28 Feb 2023 21:37:05 +0000
Message-Id: <20230228213738.272178-3-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().
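
[Editor's sketch, not part of the patch: the generic fallback added below is
an empty function.  An architecture whose existing flush_icache_page() works
one page at a time could, hypothetically, implement the range version as a
simple loop like this.]

        static inline void flush_icache_pages(struct vm_area_struct *vma,
                                              struct page *page, unsigned int nr)
        {
                unsigned int i;

                /* The nr pages are physically consecutive. */
                for (i = 0; i < nr; i++)
                        flush_icache_page(vma, page + i);
        }
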
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM)
---
 Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
 include/asm-generic/cacheflush.h    |  5 +++++
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..d4c9e2a28d36 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,13 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
+   unsigned long address, pte_t *ptep, unsigned int nr)``
 
-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +306,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it
 	will clear this bit when such a page first enters the pagecache.
 
-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.
 
 .. important::
 
@@ -369,7 +370,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache.  In the future, the hope
+	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {

From patchwork Tue Feb 28 21:37:06 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62611
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v3 03/34] mm: Add folio_flush_mapping()
Date: Tue, 28 Feb 2023 21:37:06 +0000
Message-Id: <20230228213738.272178-4-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.
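
[Editor's sketch, not part of the patch: the intended caller is architecture
cache-flushing code.  example_flush_dcache_folio() is a made-up name; the
real per-architecture conversions appear later in this series.]

        void example_flush_dcache_folio(struct folio *folio)
        {
                /* NULL for anon and swapcache folios: nothing to track. */
                struct address_space *mapping = folio_flush_mapping(folio);

                if (!mapping)
                        return;
                /* ... defer or perform the flush based on mapping_mapped() ... */
        }
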
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 51b75b89730e..1b1ba3d5100d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -369,6 +369,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
 	return folio->mapping;
 }
 
+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return NULL;
+
+	return folio_mapping(folio);
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
 	return folio_file_mapping(page_folio(page));
@@ -379,11 +399,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }
 
 /**

From patchwork Tue Feb 28 21:37:07 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62645
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v3 04/34] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Tue, 28 Feb 2023 21:37:07 +0000
Message-Id: <20230228213738.272178-5-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.
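
[Editor's illustration of that convention, with a made-up function name:
the architecture defines the symbol to its own name, and generic code tests
that define instead of a separate ARCH_IMPLEMENTS_* macro.]

        /* arch header, when the architecture provides its own version: */
        void frobnicate_folio(struct folio *folio);
        #define frobnicate_folio frobnicate_folio

        /* generic header supplies the fallback only if the arch didn't: */
        #ifndef frobnicate_folio
        static inline void frobnicate_folio(struct folio *folio)
        {
        }
        #endif
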
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/core-api/cachetlb.rst | 24 +++++++++---------------
 include/linux/cacheflush.h          |  4 ++--
 mm/util.c                           |  2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index d4c9e2a28d36..770008afd409 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -269,7 +269,7 @@ maps this page at its virtual address.
 	If D-cache aliasing is not an issue, these two routines may
 	simply call memcpy/memset directly and do nothing more.
 
-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``
 
 	This routines must be called when:
 
@@ -277,7 +277,7 @@ maps this page at its virtual address.
 	     and / or in high memory
 	  b) the kernel is about to read from a page cache page and user space
 	     shared/writable mappings of this page potentially exist.  Note
-	     that {get,pin}_user_pages{_fast} already call flush_dcache_page
+	     that {get,pin}_user_pages{_fast} already call flush_dcache_folio
 	     on any page found in the user address space and thus driver
 	     code rarely needs to take this into account.
 
@@ -291,7 +291,7 @@ maps this page at its virtual address.
 
 	The phrase "kernel writes to a page cache page" means, specifically,
 	that the kernel executes store instructions that dirty data in that
-	page at the page->virtual mapping of that page.  It is important to
+	page at the kernel virtual mapping of that page.  It is important to
 	flush here to handle D-cache aliasing, to make sure these kernel
 	stores are visible to user space mappings of that page.
 
@@ -302,18 +302,18 @@ maps this page at its virtual address.
 	If D-cache aliasing is not an issue, this routine may simply be defined
 	as a nop on that architecture.
 
-	There is a bit set aside in page->flags (PG_arch_1) as "architecture
+	There is a bit set aside in folio->flags (PG_arch_1) as "architecture
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
 	This allows these interfaces to be implemented much more
 	efficiently.  It allows one to "defer" (perhaps indefinitely) the
 	actual flush if there are currently no user processes mapping this
-	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
 	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if
-	page_file_mapping() returns a mapping, and mapping_mapped on that
+	The idea is, first at flush_dcache_folio() time, if
+	folio_flush_mapping() returns a mapping, and mapping_mapped() on that
 	mapping returns %false, just mark the architecture private page
 	flag bit.  Later, in update_mmu_cache_range(), a check is made
 	of this flag bit, and if set the flush is done and the flag bit
@@ -327,12 +327,6 @@ maps this page at its virtual address.
 	dirty.  Again, see sparc64 for examples of how to deal
 	with this.
 
-  ``void flush_dcache_folio(struct folio *folio)``
-	This function is called under the same circumstances as
-	flush_dcache_page().  It allows the architecture to
-	optimise for flushing the entire folio of pages instead
-	of flushing one page at a time.
-
   ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
   ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
@@ -353,7 +347,7 @@ maps this page at its virtual address.
 
 	When the kernel needs to access the contents of an anonymous
 	page, it calls this function (currently only
-	get_user_pages()).  Note: flush_dcache_page() deliberately
+	get_user_pages()).  Note: flush_dcache_folio() deliberately
 	doesn't work for an anonymous page.  The default
 	implementation is a nop (and should remain so for all coherent
 	architectures).  For incoherent architectures, it should flush
@@ -370,7 +364,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
+	flush_dcache_folio and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@ struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c
index b8ed9dbc7fd5..f66e0ca82d2d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1124,7 +1124,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);

From patchwork Tue Feb 28 21:37:08 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62613
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org,
    Richard Henderson, Ivan Kokshaysky, Matt Turner,
    linux-alpha@vger.kernel.org
Subject: [PATCH v3 05/34] alpha: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:08 +0000
Message-Id: <20230228213738.272178-6-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_icache_pages().
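
[Editor's sketch, not part of the patch: what a multi-page call looks like
on alpha.  The helper below is hypothetical; set_ptes() writes nr
consecutive PTEs, advancing the raw pte value by 1UL << 32 per page (the
step the diff below uses), and update_mmu_cache_range() is a no-op here.]

        void example_map_folio_pages(struct vm_area_struct *vma,
                        unsigned long addr, pte_t *ptep, pte_t pte)
        {
                /* Fills ptep[0..3] with PTEs for four consecutive pages. */
                set_ptes(vma->vm_mm, addr, ptep, pte, 4);
                update_mmu_cache_range(vma, addr, ptep, 4);
        }
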
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    | 18 +++++++++++++++++-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include <asm-generic/cacheflush.h>
 
 #endif /* _ALPHA_CACHEFLUSH_H */
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..1e3354e9731b 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,18 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1UL << 32;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -303,6 +314,11 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }
 
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs.  Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().
From patchwork Tue Feb 28 21:37:09 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62634
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org,
    Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v3 06/34] arc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:09 +0000
Message-Id: <20230228213738.272178-7-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which
means it cannot always be set as we don't know that all pages in this
folio were cleaned).  Enhance the internal flush routines to take the
number of pages to flush.
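
[Editor's illustration, not part of the patch: the per-folio PG_dc_clean
logic the message describes, simplified from the tlb.c hunk below.  One
flag bit now covers the whole folio, so when the bit is found clear the
flush must cover every page of the folio, since there is no record of
which particular page was dirtied.]

        static void example_deferred_flush(struct folio *folio,
                        phys_addr_t paddr, unsigned long vaddr)
        {
                /* test_and_set_bit() returns the old bit: 0 means a
                 * flush is still owed for some page in this folio. */
                if (!test_and_set_bit(PG_dc_clean, &folio->flags))
                        __flush_dcache_pages(paddr, vaddr,
                                             folio_nr_pages(folio));
        }
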
Signed-off-by: Matthew Wilcox (Oracle) Cc: Vineet Gupta Cc: linux-snps-arc@lists.infradead.org --- arch/arc/include/asm/cacheflush.h | 7 ++- arch/arc/include/asm/pgtable-bits-arcv2.h | 20 ++++++-- arch/arc/mm/cache.c | 61 ++++++++++++++--------- arch/arc/mm/tlb.c | 18 ++++--- 4 files changed, 68 insertions(+), 38 deletions(-) diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h index e201b4b1655a..04f65f588510 100644 --- a/arch/arc/include/asm/cacheflush.h +++ b/arch/arc/include/asm/cacheflush.h @@ -25,17 +25,20 @@ * in update_mmu_cache() */ #define flush_icache_page(vma, page) +#define flush_icache_pages(vma, page, nr) void flush_cache_all(void); void flush_icache_range(unsigned long kstart, unsigned long kend); void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len); -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr); -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr); +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio void dma_cache_wback_inv(phys_addr_t start, unsigned long sz); void dma_cache_inv(phys_addr_t start, unsigned long sz); diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index 6e9f8ca6d6a1..4a1b2ce204c6 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -100,14 +100,24 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - set_pte(ptep, pteval); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr); + +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c index 55c6de138eae..3c16ee942a5c 100644 --- a/arch/arc/mm/cache.c +++ b/arch/arc/mm/cache.c @@ -752,17 +752,17 @@ static inline void arc_slc_enable(void) * There's a corollary case, where kernel READs from a userspace mapped page. * If the U-mapping is not congruent to K-mapping, former needs flushing. 
*/ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; if (!cache_is_vipt_aliasing()) { - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); return; } /* don't handle anon pages here */ - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!mapping) return; @@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page) * Make a note that K-mapping is dirty */ if (!mapping_mapped(mapping)) { - clear_bit(PG_dc_clean, &page->flags); - } else if (page_mapcount(page)) { - + clear_bit(PG_dc_clean, &folio->flags); + } else if (folio_mapped(folio)) { /* kernel reading from page with U-mapping */ - phys_addr_t paddr = (unsigned long)page_address(page); - unsigned long vaddr = page->index << PAGE_SHIFT; + phys_addr_t paddr = (unsigned long)folio_address(folio); + unsigned long vaddr = folio_pos(folio); + /* + * vaddr is not actually the virtual address, but is + * congruent to every user mapping. + */ if (addr_not_cache_congruent(paddr, vaddr)) - __flush_dcache_page(paddr, vaddr); + __flush_dcache_pages(paddr, vaddr, + folio_nr_pages(folio)); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + return flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); /* @@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len) } /* wrapper to compile time eliminate alignment checks in flush loop */ -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr) +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE); + __ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE); } /* * wrapper to clearout kernel or userspace mappings of a page * For kernel mappings @vaddr == @paddr */ -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr) +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV); + __dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV); } noinline void flush_cache_all(void) @@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr, u_vaddr &= PAGE_MASK; - __flush_dcache_page(paddr, u_vaddr); + __flush_dcache_pages(paddr, u_vaddr, 1); if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, u_vaddr); + __inv_icache_pages(paddr, u_vaddr, 1); } void flush_cache_range(struct vm_area_struct *vma, unsigned long start, @@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long u_vaddr) { /* TBD: do we really need to clear the kernel mapping */ - __flush_dcache_page((phys_addr_t)page_address(page), u_vaddr); - __flush_dcache_page((phys_addr_t)page_address(page), - (phys_addr_t)page_address(page)); + __flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1); + __flush_dcache_pages((phys_addr_t)page_address(page), + (phys_addr_t)page_address(page), 1); } @@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); + struct folio *dst = page_folio(to); void *kfrom = kmap_atomic(from); void *kto = kmap_atomic(to); int clean_src_k_mappings = 0; @@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from, * 
addr_not_cache_congruent() is 0 */ if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) { - __flush_dcache_page((unsigned long)kfrom, u_vaddr); + __flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1); clean_src_k_mappings = 1; } @@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from, * non copied user pages (e.g. read faults which wire in pagecache page * directly). */ - clear_bit(PG_dc_clean, &to->flags); + clear_bit(PG_dc_clean, &dst->flags); /* * if SRC was already usermapped and non-congruent to kernel mapping * sync the kernel mapping back to physical page */ if (clean_src_k_mappings) { - __flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom); - set_bit(PG_dc_clean, &from->flags); + __flush_dcache_pages((unsigned long)kfrom, + (unsigned long)kfrom, 1); } else { - clear_bit(PG_dc_clean, &from->flags); + clear_bit(PG_dc_clean, &src->flags); } kunmap_atomic(kto); @@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from, void clear_user_page(void *to, unsigned long u_vaddr, struct page *page) { + struct folio *folio = page_folio(page); clear_page(to); - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); } EXPORT_SYMBOL(clear_user_page); diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 5f71445f26bd..0a996b65bb4e 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) * Note that flush (when done) involves both WBACK - so physical page is * in sync as well as INV - so any non-congruent aliases don't remain */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr) { unsigned long vaddr = vaddr_unaligned & PAGE_MASK; phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; @@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, */ if ((vma->vm_flags & VM_EXEC) || addr_not_cache_congruent(paddr, vaddr)) { - - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(page); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); if (dirty) { + unsigned long offset = offset_in_folio(folio, paddr); + nr = folio_nr_pages(folio); + paddr -= offset; + vaddr -= offset; /* wback + inv dcache lines (K-mapping) */ - __flush_dcache_page(paddr, paddr); + __flush_dcache_pages(paddr, paddr, nr); /* invalidate any existing icache lines (U-mapping) */ if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, vaddr); + __inv_icache_pages(paddr, vaddr, nr); } } } @@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { pte_t pte = __pte(pmd_val(*pmd)); - update_mmu_cache(vma, addr, &pte); + update_mmu_cache_range(vma, addr, &pte, HPAGE_PMD_NR); } void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start, From patchwork Tue Feb 28 21:37:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62626 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp3267471wrd; Tue, 28 Feb 2023 13:39:50 -0800 (PST) X-Google-Smtp-Source: AK7set862uPzkwytfYlnHYpZWB9KvFgwD648D8vn3Z1+/DqKAVLU7kYYH3+oTPulJXyhaVY1+fip X-Received: by 2002:a17:902:da88:b0:19e:31a3:1a87 with 
From patchwork Tue Feb 28 21:37:10 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62626
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org,
 linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Russell King, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 07/34] arm: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:10 +0000
Message-Id: <20230228213738.272178-8-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Change the PG_dcache_clean flag from being
per-page to per-folio, which makes __dma_page_dev_to_cpu() a bit
more exciting.  Also add flush_cache_pages(), even though this isn't
used by generic code (yet?).

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/include/asm/cacheflush.h | 24 +++++---
 arch/arm/include/asm/pgtable.h    |  5 +-
 arch/arm/include/asm/tlbflush.h   | 13 ++--
 arch/arm/mm/copypage-v4mc.c       |  5 +-
 arch/arm/mm/copypage-v6.c         |  5 +-
 arch/arm/mm/copypage-xscale.c     |  5 +-
 arch/arm/mm/dma-mapping.c         | 24 ++++----
 arch/arm/mm/fault-armv.c          | 14 ++---
 arch/arm/mm/flush.c               | 99 +++++++++++++++++++------------
 arch/arm/mm/mm.h                  |  2 +-
 arch/arm/mm/mmu.c                 | 14 +++--
 11 files changed, 125 insertions(+), 85 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index a094f964c869..841e268d2374 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 			vma->vm_flags);
 }

-static inline void
-vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
+static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
+		unsigned long user_addr, unsigned long pfn, unsigned int nr)
 {
 	struct mm_struct *mm = vma->vm_mm;

 	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
 		unsigned long addr = user_addr & PAGE_MASK;
-		__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
+		__cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE,
+				vma->vm_flags);
 	}
 }

@@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig
 		vivt_flush_cache_mm(mm)
 #define flush_cache_range(vma,start,end) \
 		vivt_flush_cache_range(vma,start,end)
-#define flush_cache_page(vma,addr,pfn) \
-		vivt_flush_cache_page(vma,addr,pfn)
+#define flush_cache_pages(vma, addr, pfn, nr) \
+		vivt_flush_cache_pages(vma, addr, pfn, nr)
 #else
-extern void flush_cache_mm(struct mm_struct *mm);
-extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn);
+void flush_cache_mm(struct mm_struct *mm);
+void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+void flush_cache_pages(struct vm_area_struct *vma,
unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). */ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index a58ccbb406ad..6525ac82bd50 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..7d792e485f4f 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,21 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. 
*/ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..5ecfde41d70a 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -693,6 +693,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -707,19 +708,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. 
 	 */
 	if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) {
-		unsigned long pfn;
-		size_t left = size;
-
-		pfn = page_to_pfn(page) + off / PAGE_SIZE;
-		off %= PAGE_SIZE;
-		if (off) {
-			pfn++;
-			left -= PAGE_SIZE - off;
+		ssize_t left = size;
+		size_t offset = offset_in_folio(folio, paddr);
+
+		if (offset) {
+			left -= folio_size(folio) - offset;
+			folio = folio_next(folio);
 		}
-		while (left >= PAGE_SIZE) {
-			page = pfn_to_page(pfn++);
-			set_bit(PG_dcache_clean, &page->flags);
-			left -= PAGE_SIZE;
+
+		while (left >= (ssize_t)folio_size(folio)) {
+			set_bit(PG_dcache_clean, &folio->flags);
+			left -= folio_size(folio);
+			folio = folio_next(folio);
 		}
 	}
 }

diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 0e49154454a6..e2c869b8f012 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -178,8 +178,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
  *
  * Note that the pte lock will be held.
  */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-	pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+	pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
 	struct address_space *mapping;
-	struct page *page;
+	struct folio *folio;
@@ -192,13 +192,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
 	 * The zero page is never written to, so never has any dirty
 	 * cache lines, and therefore never needs to be flushed.
 	 */
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;

-	mapping = page_mapping_file(page);
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		__flush_dcache_page(mapping, page);
+	folio = page_folio(pfn_to_page(pfn));
+	mapping = folio_flush_mapping(folio);
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+		__flush_dcache_folio(mapping, folio);
 	if (mapping) {
 		if (cache_is_vivt())
 			make_coherent(mapping, vma, addr, ptep, pfn);

diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 7ff9feea13a6..07ea0ab51099 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 		__flush_icache_all();
 }

-void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
+void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr)
 {
 	if (cache_is_vivt()) {
-		vivt_flush_cache_page(vma, user_addr, pfn);
+		vivt_flush_cache_pages(vma, user_addr, pfn, nr);
 		return;
 	}

@@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 #endif
 }

-void __flush_dcache_page(struct address_space *mapping, struct page *page)
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
 {
 	/*
 	 * Writeback any data associated with the kernel mapping of this
 	 * page.  This ensures that data in the physical page is mutually
 	 * coherent with the kernels mapping.
*/ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. */ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + unsigned long start, offset, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. 
*/ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where * the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. 
 * The expected sequence is:

diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index d7ffccb7fea7..419316316711 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -45,7 +45,7 @@ struct mem_type {

 const struct mem_type *get_mem_type(unsigned int type);

-extern void __flush_dcache_page(struct address_space *mapping, struct page *page);
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio);

 /*
  * ARM specific vm_struct->flags bits.

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 463fc2a8448f..9947bbc32b04 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	bootmem_init();

 	empty_zero_page = virt_to_page(zero_page);
-	__flush_dcache_page(NULL, empty_zero_page);
+	__flush_dcache_folio(NULL, page_folio(empty_zero_page));
 }

 void __init early_mm_init(const struct machine_desc *mdesc)
@@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc)
 	early_paging_init(mdesc);
 }

-void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+void set_ptes(struct mm_struct *mm, unsigned long addr,
+	pte_t *ptep, pte_t pteval, unsigned int nr)
 {
 	unsigned long ext = 0;

@@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr,
 		ext |= PTE_EXT_NG;
 	}

-	set_pte_ext(ptep, pteval, ext);
+	for (;;) {
+		set_pte_ext(ptep, pteval, ext);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pteval) += PAGE_SIZE;
+	}
 }
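set_ptes() walks nr consecutive PTEs, bumping the physical address by PAGE_SIZE at each step. How a caller might use the new entry points to map a whole folio in one go, sketched under the assumption that the folio lies entirely within the VMA (map_folio() and its prot argument are illustrative, not part of the patch):

```c
/*
 * Illustrative only: a fault path could map an entire folio with one
 * set_ptes() call instead of nr separate set_pte_at() calls.
 */
static void map_folio(struct vm_area_struct *vma, unsigned long addr,
		      pte_t *ptep, struct folio *folio, pgprot_t prot)
{
	unsigned int nr = folio_nr_pages(folio);
	pte_t pte = mk_pte(&folio->page, prot);	/* PTE for the first page */

	flush_icache_pages(vma, &folio->page, nr);
	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vma, addr, ptep, nr);
}
```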
From patchwork Tue Feb 28 21:37:11 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62631
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 08/34] arm64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:11 +0000
Message-Id: <20230228213738.272178-9-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/cacheflush.h | 4 +++- arch/arm64/include/asm/pgtable.h | 25 ++++++++++++++------ arch/arm64/mm/flush.c | 36 +++++++++++------------------ 3 files changed, 35 insertions(+), 30 deletions(-) diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 37185e978aeb..d115451ed263 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, #define copy_to_user_page copy_to_user_page /* - * flush_dcache_page is used when the kernel has written to the page + * flush_dcache_folio is used when the kernel has written to the page * cache page at virtual address page->virtual. * * If this page isn't mapped (ie, page_mapping == NULL), or it might @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio static __always_inline void icache_inval_all_pou(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 69765dc697af..4d1b79dbff16 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - return __set_pte_at(mm, addr, ptep, pte); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pte); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * Huge pte definitions. @@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { /* * We don't do anything here, so there's a very small chance of @@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #ifdef CONFIG_ARM64_PA_BITS_52 diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index 5f9379b3c8c8..deb781af0a3a 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, void __sync_icache_dcache(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
-		sync_icache_aliases((unsigned long)page_address(page),
-				    (unsigned long)page_address(page) +
-					    page_size(page));
-		set_bit(PG_dcache_clean, &page->flags);
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
+		sync_icache_aliases((unsigned long)folio_address(folio),
+				    (unsigned long)folio_address(folio) +
+					    folio_size(folio));
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);

@@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
  * it as dirty for later flushing when mapped in user space (if executable,
  * see __sync_icache_dcache).
  */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	/*
-	 * HugeTLB pages are always fully mapped and only head page will be
-	 * set PG_dcache_clean (see comments in __sync_icache_dcache()).
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+EXPORT_SYMBOL(flush_dcache_folio);

-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
 }
 EXPORT_SYMBOL(flush_dcache_page);
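On arm64 the expensive work is deferred: flush_dcache_folio() only clears PG_dcache_clean, and the real maintenance runs once per folio in __sync_icache_dcache() when the folio is mapped executable. A sketch of that lifecycle, where kernel_writes_pagecache() and map_into_user() are hypothetical names for the two sides of the protocol, not kernel functions:

```c
/* Sketch of the deferred-flush protocol; not literal kernel code. */
void kernel_writes_pagecache(struct folio *folio)
{
	/* producer: mark the folio's caches as possibly stale */
	flush_dcache_folio(folio);	/* just clears PG_dcache_clean */
}

void map_into_user(pte_t pte)
{
	/*
	 * consumer: set_ptes() ends up in __sync_icache_dcache(), which
	 * syncs the whole folio once and re-sets PG_dcache_clean so
	 * later mappings of the same folio skip the flush.
	 */
	__sync_icache_dcache(pte);
}
```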
From patchwork Tue Feb 28 21:37:12 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62612
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v3 09/34] csky: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:12 +0000
Message-Id: <20230228213738.272178-10-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Cc: linux-csky@vger.kernel.org
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 30 +++++++++++++-------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      | 21 +++++++++++++++---
 5 files changed, 62 insertions(+), 33 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index fb91b069dc69..ba43f6c26b4f 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -14,43 +14,49 @@

 #define PG_dcache_clean		PG_arch_1

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;

-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else {
 		dcache_wbinv_all();
 		if (mapping)
 			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-	pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+	pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;

 	if (!pfn_valid(pfn))
 		return;

-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;

-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
 		dcache_wbinv_all();

-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
 		if (vma->vm_flags & VM_EXEC)
 			icache_inv_all();
 	}
 }

diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio

 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()

diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 39c51399dd81..c1cf0d55a2a1 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -6,30 +6,30 @@
 #include
 #include

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-	pte_t *pte)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+	pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;

-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn) || is_zero_pfn(pfn))
 		return;

-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));

-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;

-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+				i * PAGE_SIZE);

-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }

 void flush_icache_deferred(struct mm_struct *mm)

diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@

 #define PG_dcache_clean		PG_arch_1

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }

 #define flush_dcache_mmap_lock(mapping)		do { } while (0)

diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..a30ae048233e 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -90,7 +90,20 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +276,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-	pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+	pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
 	remap_pfn_range(vma, vaddr, pfn, size, prot)
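A recurring detail in these set_ptes() loops is the bare `pte_val(pte) += PAGE_SIZE`: it works because on these architectures the PTE carries the physical address with the PFN in the high bits, so adding PAGE_SIZE advances the mapping by exactly one page. A hypothetical self-check of that invariant, not part of any patch here:

```c
/*
 * Sketch: verify that adding PAGE_SIZE to a PTE's raw value really
 * advances its PFN by one.  Assumes a PTE layout where the PFN is
 * stored shifted left by PAGE_SHIFT.
 */
static void check_pte_step(pte_t pte)
{
	pte_t next = __pte(pte_val(pte) + PAGE_SIZE);

	WARN_ON(pte_pfn(next) != pte_pfn(pte) + 1);
}
```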
From patchwork Tue Feb 28 21:37:13 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62616
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Brian Cain
Subject: [PATCH v3 10/34] hexagon: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:13 +0000
Message-Id: <20230228213738.272178-11-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes() and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
---
 arch/hexagon/include/asm/cacheflush.h |  7 +++++--
 arch/hexagon/include/asm/pgtable.h    | 16 ++++++++++++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..63ca314ede89 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	/*  generic_ptrace_pokedata doesn't wind up here, does it?  */
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page

diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..f58f1d920769 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))

 /*
- * set_pte_at - update page table and do whatever magic may be
+ * set_ptes - update page table and do whatever magic may be
  * necessary to make the underlying hardware/firmware take note.
  *
  * VM may require a virtual instruction to alert the MMU.
 */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
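By this point the per-arch shape is clear: every architecture ends up exporting the same small contract, with set_pte_at() kept as the one-page special case so existing callers need no changes. A schematic summary of what generic mm can assume after this series; the prototypes paraphrase the per-arch definitions rather than quoting any single header:

```c
/* Schematic contract, assumed by generic mm after this series. */
void set_ptes(struct mm_struct *mm, unsigned long addr,
	      pte_t *ptep, pte_t pte, unsigned int nr);
void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
			    pte_t *ptep, unsigned int nr);
void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
			unsigned int nr);
void flush_dcache_folio(struct folio *folio);

/* and the single-page forms become trivial wrappers: */
#define set_pte_at(mm, addr, ptep, pte)	set_ptes(mm, addr, ptep, pte, 1)
#define update_mmu_cache(vma, addr, ptep) \
	update_mmu_cache_range(vma, addr, ptep, 1)
```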
From patchwork Tue Feb 28 21:37:14 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62628
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 linux-ia64@vger.kernel.org
Subject: [PATCH v3 11/34] ia64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:14 +0000
Message-Id: <20230228213738.272178-12-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
more exciting.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++++-----------
 arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
 arch/ia64/include/asm/pgtable.h    | 14 +++++++++++++-
 arch/ia64/mm/init.c                | 29 +++++++++++++++++----------
 4 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 8ad6946521d8..48d475f10003 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
 #endif

 #ifdef ENABLE_MARK_CLEAN
-/**
+/*
  * Since DMA is i-cache coherent, any (complete) pages that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
-static void
-mark_clean (void *addr, size_t size)
+static void mark_clean(void *addr, size_t size)
 {
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page((void *)pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
+	struct folio *folio = virt_to_folio(addr);
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, addr);
+
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
 	}
 }
 #endif

diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
index 708c0fa5d975..eac493fa9e0d 100644
--- a/arch/ia64/include/asm/cacheflush.h
+++ b/arch/ia64/include/asm/cacheflush.h
@@ -13,10 +13,16 @@
 #include

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			\
-do {						\
-	clear_bit(PG_arch_1, &(page)->flags);	\
-} while (0)
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_arch_1, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}

 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_range flush_icache_range

diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 21c97e31a28a..0c2be4ea664b 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -303,7 +303,18 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }

-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /*
  * Make page protection values cacheable, uncacheable, or write-
@@ -396,6 +407,7 @@ pte_same (pte_t a, pte_t b)
 	return pte_val(a) == pte_val(b);
 }

+#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0)
 #define update_mmu_cache(vma, address, ptep) do { } while (0)

 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 7f5353e28516..12aef25944aa 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -50,30 +50,39 @@ void
 __ia64_sync_icache_dcache (pte_t pte)
 {
 	unsigned long addr;
-	struct page *page;
+	struct folio *folio;
 
-	page = pte_page(pte);
-	addr = (unsigned long) page_address(page);
+	folio = page_folio(pte_page(pte));
+	addr = (unsigned long)folio_address(folio);
 
-	if (test_bit(PG_arch_1, &page->flags))
+	if (test_bit(PG_arch_1, &folio->flags))
 		return;				/* i-cache is already coherent with d-cache */
 
-	flush_icache_range(addr, addr + page_size(page));
-	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
+	flush_icache_range(addr, addr + folio_size(folio));
+	set_bit(PG_arch_1, &folio->flags);	/* mark page as clean */
 }
 
 /*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * Since DMA is i-cache coherent, any (complete) folios that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
 void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
 {
-	unsigned long pfn = PHYS_PFN(paddr);
+	struct folio *folio = page_folio(phys_to_page(paddr));
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, paddr);
 
-	do {
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
-		set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
-	} while (++pfn <= PHYS_PFN(paddr + size - 1));
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
+	}
 }
 
 inline void
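
[Editor's note -- not part of the patch] Both conversions in this patch walk folios across a byte range with the same shape: skip any partial folio at the start, then mark only the folios that lie entirely inside the range. A minimal standalone sketch of that pattern, using only helpers that appear in the patch; note that the (ssize_t) cast matters, because folio_size() returns size_t and an all-unsigned comparison would turn a negative remainder into a huge value and walk past the end of the range:

    /* Editor's sketch: mark every folio wholly contained in
     * [addr, addr + size) as i-cache clean. Partial folios at either
     * end are deliberately left unmarked, mirroring the old
     * page-granular code.
     */
    static void mark_range_clean(void *addr, size_t size)
    {
            struct folio *folio = virt_to_folio(addr);
            ssize_t left = size;
            size_t offset = offset_in_folio(folio, addr);

            if (offset) {
                    /* skip the partial folio at the start */
                    left -= folio_size(folio) - offset;
                    folio = folio_next(folio);
            }
            while (left >= (ssize_t)folio_size(folio)) {
                    set_bit(PG_arch_1, &folio->flags);
                    left -= folio_size(folio);
                    folio = folio_next(folio);
            }
    }
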
From patchwork Tue Feb 28 21:37:15 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Huacai Chen, WANG Xuerui, loongarch@lists.linux.dev
Subject: [PATCH v3 12/34] loongarch: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:15 +0000
Message-Id: <20230228213738.272178-13-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range(). It would probably be more
efficient to implement __update_tlb() by flushing the entire folio
instead of calling __update_tlb() N times, but I'll leave that for
someone who understands the architecture better.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
Signed-off-by: WANG Xuerui
---
 arch/loongarch/include/asm/cacheflush.h |  2 ++
 arch/loongarch/include/asm/pgtable.h    | 30 +++++++++++++++++++------
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..7907eb42bfbd 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page, nr)		do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
+#define flush_dcache_folio(folio)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index d28fb9dbec59..9154d317ffb4 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -334,12 +334,20 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	}
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	/* Preserve global status for the pair */
@@ -445,11 +453,19 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern void __update_tlb(struct vm_area_struct *vma,
 			unsigned long address, pte_t *ptep);
 
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	for (;;) {
+		__update_tlb(vma, address, ptep);
+		if (--nr == 0)
+			break;
+		address += PAGE_SIZE;
+		ptep++;
+	}
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
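
[Editor's note -- not part of the patch] Every set_ptes() in this series has the same skeleton; only the increment that advances the PFN differs with the PTE encoding: PAGE_SIZE on ia64 and m68k, where the PTE holds a physical address; 1 << _PFN_SHIFT on loongarch and mips, where it holds a shifted PFN; 1 on nios2, where it holds a bare PFN. A sketch with the per-architecture step left as a named placeholder:

    /* Editor's sketch of the common set_ptes() shape.
     * PTE_STEP_ONE_PAGE is a placeholder, not a real macro: it stands
     * for whatever increment advances the PFN field of the
     * architecture's PTE encoding by exactly one page.
     */
    static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                    pte_t *ptep, pte_t pte, unsigned int nr)
    {
            for (;;) {
                    set_pte(ptep, pte);
                    if (--nr == 0)
                            break;
                    ptep++;
                    pte_val(pte) += PTE_STEP_ONE_PAGE;  /* placeholder */
            }
    }

A caller mapping a whole folio passes the PTE for the folio's first page and nr = folio_nr_pages(folio), so all the entries are installed with one call instead of one call per page.
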
From patchwork Tue Feb 28 21:37:16 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Geert Uytterhoeven, linux-m68k@lists.linux-m68k.org
Subject: [PATCH v3 13/34] m68k: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:16 +0000
Message-Id: <20230228213738.272178-14-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Geert Uytterhoeven
Cc: linux-m68k@lists.linux-m68k.org
---
 arch/m68k/include/asm/cacheflush_mm.h | 26 +++++++++++++++++---------
 arch/m68k/include/asm/pgtable_mm.h    | 21 ++++++++++++++++++---
 arch/m68k/mm/motorola.c               |  2 +-
 3 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 1ac55e7b47f0..d43c8bce149b 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -220,24 +220,28 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm
 /* Push the page at kernel virtual address and clear the icache */
 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
-static inline void __flush_page_to_ram(void *vaddr)
+static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 {
 	if (CPU_IS_COLDFIRE) {
 		unsigned long addr, start, end;
 		addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1);
 		start = addr & ICACHE_SET_MASK;
-		end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK;
+		end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
 		if (start > end) {
 			flush_cf_bcache(0, end);
 			end = ICACHE_MAX_ADDR;
 		}
 		flush_cf_bcache(start, end);
 	} else if (CPU_IS_040_OR_060) {
-		__asm__ __volatile__("nop\n\t"
-				     ".chip 68040\n\t"
-				     "cpushp %%bc,(%0)\n\t"
-				     ".chip 68k"
-				     : : "a" (__pa(vaddr)));
+		unsigned long paddr = __pa(vaddr);
+
+		while (nr--) {
+			__asm__ __volatile__("nop\n\t"
+					     ".chip 68040\n\t"
+					     "cpushp %%bc,(%0)\n\t"
+					     ".chip 68k"
+					     : : "a" (paddr + nr * PAGE_SIZE));
+		}
 	} else {
 		unsigned long _tmp;
 		__asm__ __volatile__("movec %%cacr,%0\n\t"
@@ -249,10 +253,14 @@ static inline void __flush_page_to_ram(void *vaddr)
 }
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)	__flush_page_to_ram(page_address(page))
+#define flush_dcache_page(page)	__flush_pages_to_ram(page_address(page), 1)
+#define flush_dcache_folio(folio) \
+	__flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio))
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)	__flush_page_to_ram(page_address(page))
+#define flush_icache_pages(vma, page, nr)	\
+	__flush_pages_to_ram(page_address(page), nr)
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
 
 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				   unsigned long addr, int len);

diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index b93c41fe2067..400206c17c97 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -31,8 +31,20 @@ do{							\
 	*(pteptr) = (pteval);				\
 } while(0)
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #if CONFIG_PGTABLE_LEVELS == 3
@@ -138,11 +150,14 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * tables contain all the necessary information.  The Sun3 does, but
  * they are updated on demand.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #endif	/* !__ASSEMBLY__ */
 
 /* MMU-specific headers */

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 2a375637e007..7784d0fcdf6e 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr)
 
 void mmu_page_ctor(void *page)
 {
-	__flush_page_to_ram(page);
+	__flush_pages_to_ram(page, 1);
 	flush_tlb_kernel_page(page);
 	nocache_page(page);
 }
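
[Editor's note -- not part of the patch] The ColdFire branch of __flush_pages_to_ram() deserves a gloss now that it can cover more than one page: cache-set indices wrap modulo ICACHE_SET_MASK, so the masked end of a multi-page range can land below the masked start, and the flush must then be issued in two ordered pieces. A condensed view of just that logic, with the identifiers taken from the patch:

    /* Editor's condensed view of the ColdFire range flush: when the
     * masked range wraps past the top of the cache-set index space,
     * flush the wrapped tail first, then everything from start up to
     * the top.
     */
    static inline void coldfire_flush_range(unsigned long addr,
                                            unsigned int nr)
    {
            unsigned long start, end;

            start = addr & ICACHE_SET_MASK;
            end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
            if (start > end) {              /* range wrapped around */
                    flush_cf_bcache(0, end);
                    end = ICACHE_MAX_ADDR;
            }
            flush_cf_bcache(start, end);
    }
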
From patchwork Tue Feb 28 21:37:17 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Michal Simek
Subject: [PATCH v3 14/34] microblaze: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:17 +0000
Message-Id: <20230228213738.272178-15-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio(). Also change the calling convention for set_pte()
to be the same as other architectures.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Michal Simek
---
 arch/microblaze/include/asm/cacheflush.h |  8 ++++++++
 arch/microblaze/include/asm/pgtable.h    | 17 ++++++++++++-----
 arch/microblaze/include/asm/tlbflush.h   |  4 +++-
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index 39f8fb6768d8..e6641ff98cb3 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -74,6 +74,14 @@ do { \
 	flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \
 } while (0);
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	unsigned long addr = folio_pfn(folio) << PAGE_SHIFT;
+
+	flush_dcache_range(addr, addr + folio_size(folio));
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);

diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index d1b8272abcd9..a01e1369b486 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -330,18 +330,25 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-static inline void set_pte(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	*ptep = pte;
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_SHIFT_OFFSET;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 		unsigned long address, pte_t *ptep)

diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index 2038168ed128..1b179e5e9062 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma,
 
 #define flush_tlb_kernel_range(start, end)	do { } while (0)
 
-#define update_mmu_cache(vma, addr, ptep)	do { } while (0)
+#define update_mmu_cache_range(vma, addr, ptep, nr)	do { } while (0)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define flush_tlb_all local_flush_tlb_all
 #define flush_tlb_mm local_flush_tlb_mm
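
[Editor's note -- not part of the patch] The one-line update_mmu_cache() shims that recur throughout this series are statement-like macros, and one hygiene rule keeps them safe: each macro parameter must be named exactly as the expansion uses it. A shim written as

    #define update_mmu_cache(vma, addr, pte) \
            update_mmu_cache_range(vma, addr, ptep, 1)   /* wrong: 'ptep' is not a parameter */

only compiles when a variable that happens to be called ptep is in scope at the call site, silently binding to it. The safe template, as used on microblaze above:

    #define update_mmu_cache(vma, addr, ptep) \
            update_mmu_cache_range(vma, addr, ptep, 1)
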
From patchwork Tue Feb 28 21:37:18 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v3 15/34] mips: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:18 +0000
Message-Id: <20230228213738.272178-16-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/include/asm/cacheflush.h | 32 +++++++++++------
 arch/mips/include/asm/pgtable.h    | 36 +++++++++++++------
 arch/mips/mm/c-r4k.c               |  5 +--
 arch/mips/mm/cache.c               | 56 +++++++++++++++---------------
 arch/mips/mm/init.c                | 17 +++++----
 5 files changed, 88 insertions(+), 58 deletions(-)

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index b3dc9c589442..2683cade42ef 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1
 
-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)		\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)		\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)
 
 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+	struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 791389bf3c12..0cf0455e6ae8 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -105,8 +105,10 @@ do {									\
 	}								\
 } while(0)
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr);
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
@@ -204,19 +206,31 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 }
 #endif
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+	bool do_sync = false;
 
-	if (!pte_present(pteval))
-		goto cache_sync_done;
+	for (i = 0; i < nr; i++) {
+		if (!pte_present(pte))
+			continue;
+		if (pte_present(ptep[i]) &&
+		    (pte_pfn(ptep[i]) == pte_pfn(pte)))
+			continue;
+		do_sync = true;
+	}
 
-	if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
-		goto cache_sync_done;
+	if (do_sync)
+		__update_cache(addr, pte);
 
-	__update_cache(addr, pteval);
-cache_sync_done:
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }
 
 /*

diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index a549fa98c2f4..7d2a42f0cffd 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -679,13 +679,14 @@ static inline void local_r4k_flush_cache_page(void *args)
 	if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
 		vaddr = NULL;
 	else {
+		struct folio *folio = page_folio(page);
 		/*
 		 * Use kmap_coherent or kmap_atomic to do flushes for
 		 * another ASID than the current one.
 		 */
 		map_coherent = (cpu_has_dc_aliases &&
-				page_mapcount(page) &&
-				!Page_dcache_dirty(page));
+				folio_mapped(folio) &&
+				!folio_test_dcache_dirty(folio));
 		if (map_coherent)
 			vaddr = kmap_coherent(page, addr);
 		else

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 11b3e7ddafd5..0668435521fc 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -82,13 +82,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
 	return 0;
 }
 
-void __flush_dcache_page(struct page *page)
+void __flush_dcache_pages(struct page *page, unsigned int nr)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 	unsigned long addr;
+	unsigned int i;
 
 	if (mapping && !mapping_mapped(mapping)) {
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(folio);
 		return;
 	}
 
@@ -97,25 +99,21 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
-	if (PageHighMem(page))
-		addr = (unsigned long)kmap_atomic(page);
-	else
-		addr = (unsigned long)page_address(page);
-
-	flush_data_cache_page(addr);
-
-	if (PageHighMem(page))
-		kunmap_atomic((void *)addr);
+	for (i = 0; i < nr; i++) {
+		addr = (unsigned long)kmap_local_page(page + i);
+		flush_data_cache_page(addr);
+		kunmap_local((void *)addr);
+	}
 }
-
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(__flush_dcache_pages);
 
 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
 	unsigned long addr = (unsigned long) page_address(page);
+	struct folio *folio = page_folio(page);
 
 	if (pages_do_alias(addr, vmaddr)) {
-		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
+		if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 			void *kaddr;
 
 			kaddr = kmap_coherent(page, vmaddr);
@@ -130,27 +128,29 @@ EXPORT_SYMBOL(__flush_anon_page);
 
 void __update_cache(unsigned long address, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, addr;
 	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
+	unsigned int i;
 
 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (Page_dcache_dirty(page)) {
-		if (PageHighMem(page))
-			addr = (unsigned long)kmap_atomic(page);
-		else
-			addr = (unsigned long)page_address(page);
-
-		if (exec || pages_do_alias(addr, address & PAGE_MASK))
-			flush_data_cache_page(addr);
-		if (PageHighMem(page))
-			kunmap_atomic((void *)addr);
+	folio = page_folio(pfn_to_page(pfn));
+	address &= PAGE_MASK;
+	address -= offset_in_folio(folio, pfn << PAGE_SHIFT);
+
+	if (folio_test_dcache_dirty(folio)) {
+		for (i = 0; i < folio_nr_pages(folio); i++) {
+			addr = (unsigned long)kmap_local_folio(folio, i * PAGE_SIZE);
+			if (exec || pages_do_alias(addr, address))
+				flush_data_cache_page(addr);
+			kunmap_local((void *)addr);
+			address += PAGE_SIZE;
+		}
-		ClearPageDcacheDirty(page);
+		folio_clear_dcache_dirty(folio);
 	}
 }

diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 5a8002839550..19d4ca3b3fbd 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	pte_t pte;
 	int tlbidx;
 
-	BUG_ON(Page_dcache_dirty(page));
+	BUG_ON(folio_test_dcache_dirty(page_folio(page)));
 
 	preempt_disable();
 	pagefault_disable();
@@ -169,11 +169,12 @@ void kunmap_coherent(void)
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
 
 	vto = kmap_atomic(to);
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(from) && !Page_dcache_dirty(from)) {
+	    folio_mapped(src) && !folio_test_dcache_dirty(src)) {
 		vfrom = kmap_coherent(from, vaddr);
 		copy_page(vto, vfrom);
 		kunmap_coherent();
@@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(vto, src, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 	if (vma->vm_flags & VM_EXEC)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
@@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(copy_from_user_page);
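
[Editor's note -- not part of the patch] The old PageHighMem() special-casing in __flush_dcache_pages() collapses because kmap_local_page()/kmap_local_folio() work for both highmem and lowmem pages. A sketch of the per-page flush loop; the claim that kmap_local_folio() takes a byte offset into the folio (hence the i * PAGE_SIZE) is the editor's reading of the kmap API and worth verifying against the final tree:

    /* Editor's sketch: flush every page of a folio through a
     * short-lived local mapping. For lowmem folios kmap_local_folio()
     * degenerates to simple address arithmetic, so no highmem branch
     * is needed.
     */
    static void flush_folio_pages(struct folio *folio)
    {
            unsigned int i;

            for (i = 0; i < folio_nr_pages(folio); i++) {
                    void *kaddr = kmap_local_folio(folio, i * PAGE_SIZE);

                    flush_data_cache_page((unsigned long)kaddr);
                    kunmap_local(kaddr);
            }
    }
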
From patchwork Tue Feb 28 21:37:19 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Dinh Nguyen
Subject: [PATCH v3 16/34] nios2: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:19 +0000
Message-Id: <20230228213738.272178-17-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_clean) flag
from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Dinh Nguyen
---
 arch/nios2/include/asm/cacheflush.h |  6 ++-
 arch/nios2/include/asm/pgtable.h    | 27 +++++++++----
 arch/nios2/mm/cacheflush.c          | 61 ++++++++++++++++-------------
 3 files changed, 58 insertions(+), 36 deletions(-)

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index d0b71dd71287..8624ca83cffe 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	unsigned long pfn);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 
 extern void flush_icache_range(unsigned long start, unsigned long end);
-extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 0f5c2564e9f5..8a77821a17a5 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -178,15 +178,23 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval));
-
-	flush_dcache_range(paddr, paddr + PAGE_SIZE);
-	set_pte(ptep, pteval);
+	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pte));
+
+	flush_dcache_range(paddr, paddr + nr * PAGE_SIZE);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline int pmd_none(pmd_t pmd)
 {
 	return (pmd_val(pmd) ==
@@ -273,7 +281,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 extern void __init paging_init(void);
 extern void __init mmu_init(void);
 
-extern void update_mmu_cache(struct vm_area_struct *vma,
-			     unsigned long address, pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #endif /* _ASM_NIOS2_PGTABLE_H */

diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 6aa9257c3ede..471485a84b2c 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		__flush_icache(start, end);
 }
 
-void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
 {
 	unsigned long start = (unsigned long) page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + nr * PAGE_SIZE;
 
 	__flush_dcache(start, end);
 	__flush_icache(start, end);
@@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	__flush_icache(start, end);
 }
 
-void __flush_dcache_page(struct address_space *mapping, struct page *page)
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
 {
 	/*
 	 * Writeback any data associated with the kernel mapping of this
 	 * page.  This ensures that data in the physical page is mutually
 	 * coherent with the kernels mapping.
 	 */
-	unsigned long start = (unsigned long)page_address(page);
+	unsigned long start = (unsigned long)folio_address(folio);
 
-	__flush_dcache(start, start + PAGE_SIZE);
+	__flush_dcache(start, start + folio_size(folio));
 }
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
@@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page)
 	 * The zero page is never written to, so never has any dirty
 	 * cache lines, and therefore never needs to be flushed.
 	 */
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;
 
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 
 	/* Flush this page if there are aliases. */
 	if (mapping && !mapping_mapped(mapping)) {
-		clear_bit(PG_dcache_clean, &page->flags);
+		clear_bit(PG_dcache_clean, &folio->flags);
 	} else {
-		__flush_dcache_page(mapping, page);
+		__flush_dcache_folio(mapping, folio);
 		if (mapping) {
-			unsigned long start = (unsigned long)page_address(page);
-			flush_aliases(mapping, page);
-			flush_icache_range(start, start + PAGE_SIZE);
+			unsigned long start = (unsigned long)folio_address(folio);
+			flush_aliases(mapping, folio);
+			flush_icache_range(start, start + folio_size(folio));
 		}
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
+EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache(struct vm_area_struct *vma,
-		      unsigned long address, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
 	unsigned long pfn = pte_pfn(pte);
-	struct page *page;
+	struct folio *folio;
 	struct address_space *mapping;
 
 	reload_tlb_page(vma, address, pte);
@@ -215,19 +222,19 @@ void update_mmu_cache(struct vm_area_struct *vma,
 	 * The zero page is never written to, so never has any dirty
 	 * cache lines, and therefore never needs to be flushed.
 	 */
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;
 
-	mapping = page_mapping_file(page);
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		__flush_dcache_page(mapping, page);
+	folio = page_folio(pfn_to_page(pfn));
+	mapping = folio_flush_mapping(folio);
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+		__flush_dcache_folio(mapping, folio);
 
-	if(mapping)
-	{
-		flush_aliases(mapping, page);
+	if (mapping) {
+		flush_aliases(mapping, folio);
 		if (vma->vm_flags & VM_EXEC)
-			flush_icache_page(vma, page);
+			flush_icache_pages(vma, &folio->page,
+					folio_nr_pages(folio));
 	}
 }
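
[Editor's note -- not part of the patch] flush_icache_page() above is a statement-like macro shim, and such shims must expand to exactly one statement with no trailing semicolon. A definition ending in ';' would expand

    if (cond)
            flush_icache_page(vma, page);   /* macro's ';' + caller's ';' = two statements */
    else
            do_something_else();

into a dangling empty statement before the else, which no longer compiles. The single-statement form used throughout the series:

    #define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
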
From patchwork Tue Feb 28 21:37:20 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Jonas Bonn, Stefan Kristiansson, Stafford Horne, linux-openrisc@vger.kernel.org
Subject: [PATCH v3 17/34] openrisc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:20 +0000
Message-Id: <20230228213738.272178-18-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dc_clean) flag from being per-page to
per-folio.
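
[Editor's note -- not part of the patch] The update_cache() hunk in the diff below leans on a lazy-flush idiom worth spelling out: PG_dc_clean set means "icache already coherent", and test_and_set_bit() returns the bit's old value, so

    int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);

is true at most once per folio until flush_dcache_folio() clears the bit again. The expensive per-page sync_icache_dcache() walk therefore runs only for executable mappings of folios that were actually written since the last flush.
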
Signed-off-by: Matthew Wilcox (Oracle) Cc: Jonas Bonn Cc: Stefan Kristiansson Cc: Stafford Horne Cc: linux-openrisc@vger.kernel.org --- arch/openrisc/include/asm/cacheflush.h | 8 +++++++- arch/openrisc/include/asm/pgtable.h | 27 +++++++++++++++++++++----- arch/openrisc/mm/cache.c | 12 ++++++++---- 3 files changed, 37 insertions(+), 10 deletions(-) diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h index eeac40d4a854..984c331ff5f4 100644 --- a/arch/openrisc/include/asm/cacheflush.h +++ b/arch/openrisc/include/asm/cacheflush.h @@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page) */ #define PG_dc_clean PG_arch_1 +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_dc_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 static inline void flush_dcache_page(struct page *page) { - clear_bit(PG_dc_clean, &page->flags); + flush_dcache_folio(page_folio(page)); } #define flush_icache_user_page(vma, page, addr, len) \ diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index 3eb9b9555d0d..1a7077150d7b 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -46,7 +46,21 @@ extern void paging_init(void); * hook is made available. */ #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval) + +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) @@ -379,13 +393,16 @@ static inline void update_tlb(struct vm_area_struct *vma, extern void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { - update_tlb(vma, address, pte); - update_cache(vma, address, pte); + update_tlb(vma, address, ptep); + update_cache(vma, address, ptep); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + /* __PHX__ FIXME, SWAP, this probably doesn't work */ /* diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c index 534a52ec5e66..eb43b73f3855 100644 --- a/arch/openrisc/mm/cache.c +++ b/arch/openrisc/mm/cache.c @@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte) { unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT; - struct page *page = pfn_to_page(pfn); - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); /* * Since icaches do not snoop for updated data on OpenRISC, we * must write back and invalidate any dirty pages manually. We * can skip data pages, since they will not end up in icaches. 
 */
-	if ((vma->vm_flags & VM_EXEC) && dirty)
-		sync_icache_dcache(page);
+	if ((vma->vm_flags & VM_EXEC) && dirty) {
+		unsigned int nr = folio_nr_pages(folio);
+
+		while (nr--)
+			sync_icache_dcache(folio_page(folio, nr));
+	}
 }

From patchwork Tue Feb 28 21:37:21 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 "James E.J. Bottomley", Helge Deller, linux-parisc@vger.kernel.org
Subject: [PATCH v3 18/34] parisc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:21 +0000
Message-Id: <20230228213738.272178-19-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: "James E.J.
Bottomley" Cc: Helge Deller Cc: linux-parisc@vger.kernel.org --- arch/parisc/include/asm/cacheflush.h | 14 ++-- arch/parisc/include/asm/pgtable.h | 28 +++++--- arch/parisc/kernel/cache.c | 101 +++++++++++++++++++-------- 3 files changed, 99 insertions(+), 44 deletions(-) diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index ff07c509e04b..0bf8b69d086b 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -46,16 +46,20 @@ void invalidate_kernel_vmap_range(void *vaddr, int size); #define flush_cache_vmap(start, end) flush_cache_all() #define flush_cache_vunmap(start, end) flush_cache_all() +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) -#define flush_icache_page(vma,page) do { \ - flush_kernel_dcache_page_addr(page_address(page)); \ - flush_kernel_icache_page(page_address(page)); \ -} while (0) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) #define flush_icache_range(s,e) do { \ flush_kernel_dcache_range_asm(s,e); \ diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index e2950f5db7c9..78ee9816f423 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -73,14 +73,7 @@ extern void __update_cache(pte_t pte); mb(); \ } while(0) -#define set_pte_at(mm, addr, pteptr, pteval) \ - do { \ - if (pte_present(pteval) && \ - pte_user(pteval)) \ - __update_cache(pteval); \ - *(pteptr) = (pteval); \ - purge_tlb_entries(mm, addr); \ - } while (0) +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) #endif /* !__ASSEMBLY__ */ @@ -391,11 +384,28 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) extern void paging_init (void); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + if (pte_present(pte) && pte_user(pte)) + __update_cache(pte); + for (;;) { + *ptep = pte; + purge_tlb_entries(mm, addr); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1 << PFN_PTE_SHIFT; + addr += PAGE_SIZE; + } +} + /* Used for deferring calls to flush_dcache_page() */ #define PG_dcache_dirty PG_arch_1 -#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep) +#define update_mmu_cache_range(vma, addr, ptep, nr) __update_cache(*ptep) +#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c index 984d3a1b3828..16057812103b 100644 --- a/arch/parisc/kernel/cache.c +++ b/arch/parisc/kernel/cache.c @@ -92,11 +92,11 @@ static inline void flush_data_cache(void) /* Kernel virtual address of pfn. */ #define pfn_va(pfn) __va(PFN_PHYS(pfn)) -void -__update_cache(pte_t pte) +void __update_cache(pte_t pte) { unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; + unsigned int nr; /* We don't have pte special. As a result, we can be called with an invalid pfn and we don't need to flush the kernel dcache page. 
@@ -104,13 +104,17 @@ __update_cache(pte_t pte) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page_mapping_file(page) && - test_bit(PG_dcache_dirty, &page->flags)) { - flush_kernel_dcache_page_addr(pfn_va(pfn)); - clear_bit(PG_dcache_dirty, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + if (folio_flush_mapping(folio) && + test_bit(PG_dcache_dirty, &folio->flags)) { + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); + clear_bit(PG_dcache_dirty, &folio->flags); } else if (parisc_requires_coherency()) - flush_kernel_dcache_page_addr(pfn_va(pfn)); + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); } void @@ -365,6 +369,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad preempt_enable(); } +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) +{ + void *kaddr = page_address(page); + + for (;;) { + flush_kernel_dcache_page_addr(kaddr); + flush_kernel_icache_page(kaddr); + if (--nr == 0) + break; + page += PAGE_SIZE; + } +} + static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) { pte_t *ptep = NULL; @@ -393,26 +411,30 @@ static inline bool pte_needs_flush(pte_t pte) == (_PAGE_PRESENT | _PAGE_ACCESSED); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - struct address_space *mapping = page_mapping_file(page); - struct vm_area_struct *mpnt; - unsigned long offset; + struct address_space *mapping = folio_flush_mapping(folio); + struct vm_area_struct *vma; unsigned long addr, old_addr = 0; + void *kaddr; unsigned long count = 0; + unsigned long i, nr; pgoff_t pgoff; if (mapping && !mapping_mapped(mapping)) { - set_bit(PG_dcache_dirty, &page->flags); + set_bit(PG_dcache_dirty, &folio->flags); return; } - flush_kernel_dcache_page_addr(page_address(page)); + nr = folio_nr_pages(folio); + kaddr = folio_address(folio); + for (i = 0; i < nr; i++) + flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE); if (!mapping) return; - pgoff = page->index; + pgoff = folio->index; /* * We have carefully arranged in arch_get_unmapped_area() that @@ -422,15 +444,29 @@ void flush_dcache_page(struct page *page) * on machines that support equivalent aliasing */ flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - addr = mpnt->vm_start + offset; - if (parisc_requires_coherency()) { - pte_t *ptep; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long offset = pgoff - vma->vm_pgoff; + unsigned long pfn = folio_pfn(folio); + + addr = vma->vm_start; + nr = folio_nr_pages(folio); + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + addr += offset * PAGE_SIZE; + } + if (addr + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - addr) / PAGE_SIZE; - ptep = get_ptep(mpnt->vm_mm, addr); - if (ptep && pte_needs_flush(*ptep)) - flush_user_cache_page(mpnt, addr); + if (parisc_requires_coherency()) { + for (i = 0; i < nr; i++) { + pte_t *ptep = get_ptep(vma->vm_mm, + addr + i * PAGE_SIZE); + if (ptep && pte_needs_flush(*ptep)) + flush_user_cache_page(vma, + addr + i * PAGE_SIZE); + } } else { /* * The TLB is the engine of coherence on parisc: @@ -443,27 +479,32 @@ void flush_dcache_page(struct page *page) * in (until the user or kernel specifically * accesses it, of course) */ - flush_tlb_page(mpnt, addr); + for (i = 0; i < nr; i++) + 
flush_tlb_page(vma, addr + i * PAGE_SIZE);
 			if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1))
 					!= (addr & (SHM_COLOUR - 1))) {
-				__flush_cache_page(mpnt, addr, page_to_phys(page));
+				for (i = 0; i < nr; i++)
+					__flush_cache_page(vma,
+						addr + i * PAGE_SIZE,
+						(pfn + i) * PAGE_SIZE);
 				/*
 				 * Software is allowed to have any number
 				 * of private mappings to a page.
 				 */
-				if (!(mpnt->vm_flags & VM_SHARED))
+				if (!(vma->vm_flags & VM_SHARED))
 					continue;
 				if (old_addr)
 					pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n",
-						old_addr, addr, mpnt->vm_file);
-				old_addr = addr;
+						old_addr, addr, vma->vm_file);
+				if (nr == folio_nr_pages(folio))
+					old_addr = addr;
 			}
 		}
 		WARN_ON(++count == 4096);
 	}
 	flush_dcache_mmap_unlock(mapping);
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);

 /* Defined in arch/parisc/kernel/pacache.S */
 EXPORT_SYMBOL(flush_kernel_dcache_range_asm);

From patchwork Tue Feb 28 21:37:22 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy,
 linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 19/34] powerpc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:22 +0000
Message-Id: <20230228213738.272178-20-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
per-folio.
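[Editorial aside, not part of the patch: the per-folio PG_dcache_clean
scheme the architectures in this series converge on can be summarised
by the sketch below. flush_dcache_icache_folio() is the helper this
patch adds; the wrapper name is illustrative.]

	/*
	 * Illustration only: defer icache maintenance until a page is
	 * mapped executable, and do it at folio granularity. Testing and
	 * setting the flag on folio->flags means a single flush covers
	 * every page of a large folio.
	 */
	static void make_folio_icache_clean(struct folio *folio)
	{
		if (!test_bit(PG_dcache_clean, &folio->flags)) {
			flush_dcache_icache_folio(folio);
			set_bit(PG_dcache_clean, &folio->flags);
		}
	}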
Signed-off-by: Matthew Wilcox (Oracle) Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org --- arch/powerpc/include/asm/book3s/pgtable.h | 10 +---- arch/powerpc/include/asm/cacheflush.h | 14 +++++-- arch/powerpc/include/asm/kvm_ppc.h | 10 ++--- arch/powerpc/include/asm/nohash/pgtable.h | 13 ++---- arch/powerpc/include/asm/pgtable.h | 6 +++ arch/powerpc/mm/book3s64/hash_utils.c | 11 ++--- arch/powerpc/mm/cacheflush.c | 40 ++++++------------ arch/powerpc/mm/nohash/e500_hugetlbpage.c | 3 +- arch/powerpc/mm/pgtable.c | 51 +++++++++++++---------- 9 files changed, 77 insertions(+), 81 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h index d18b748ea3ae..c2ef811505b0 100644 --- a/arch/powerpc/include/asm/book3s/pgtable.h +++ b/arch/powerpc/include/asm/book3s/pgtable.h @@ -9,13 +9,6 @@ #endif #ifndef __ASSEMBLY__ -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - - #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty); @@ -36,7 +29,8 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * corresponding HPTE into the hash table ahead of time, instead of * waiting for the inevitable extra hash-table miss exception. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index 7564dd4fd12b..ef7d2de33b89 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ b/arch/powerpc/include/asm/cacheflush.h @@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * It just marks the page as not i-cache clean. We do the i-cache * flush later when the page is given to a user process, if necessary. 
*/ -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) return; /* avoid an atomic op if possible */ - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } void flush_icache_range(unsigned long start, unsigned long stop); @@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); #define flush_icache_user_page flush_icache_user_page -void flush_dcache_icache_page(struct page *page); +void flush_dcache_icache_folio(struct folio *folio); /** * flush_dcache_range(): Write any modified data cache blocks out to memory and diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 6bef23d6d0e3..e91dd8e88bb7 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -868,7 +868,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids); static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) { - struct page *page; + struct folio *folio; /* * We can only access pages that the kernel maps * as memory. Bail out for unmapped ones. @@ -877,10 +877,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) return; /* Clear i-cache for new pages */ - page = pfn_to_page(pfn); - if (!test_bit(PG_dcache_clean, &page->flags)) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index a6caaaab6f92..69a7dd47a9f0 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -166,12 +166,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE); } -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but @@ -282,10 +276,11 @@ static inline int pud_huge(pud_t pud) * for the page which has just been mapped in. 
*/ #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE) -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr); #else -static inline -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {} +static inline void update_mmu_cache(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) {} #endif #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index 9972626ddaf6..bf1263ff7e67 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -41,6 +41,12 @@ struct mm_struct; #ifndef __ASSEMBLY__ +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr); +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1); + #ifndef MAX_PTRS_PER_PGD #define MAX_PTRS_PER_PGD PTRS_PER_PGD #endif diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index fedffe3ae136..ad2afa08e62e 100644 --- a/arch/powerpc/mm/book3s64/hash_utils.c +++ b/arch/powerpc/mm/book3s64/hash_utils.c @@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void) */ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap) { - struct page *page; + struct folio *folio; if (!pfn_valid(pte_pfn(pte))) return pp; - page = pte_page(pte); + folio = page_folio(pte_page(pte)); /* page is dirty */ - if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) { + if (!test_bit(PG_dcache_clean, &folio->flags) && + !folio_test_reserved(folio)) { if (trap == INTERRUPT_INST_STORAGE) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } else pp |= HPTE_R_N; } diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c index 0e9b4879c0f9..8760d2223abe 100644 --- a/arch/powerpc/mm/cacheflush.c +++ b/arch/powerpc/mm/cacheflush.c @@ -148,44 +148,30 @@ static void __flush_dcache_icache(void *p) invalidate_icache_range(addr, addr + PAGE_SIZE); } -static void flush_dcache_icache_hugepage(struct page *page) +void flush_dcache_icache_folio(struct folio *folio) { - int i; - int nr = compound_nr(page); + unsigned int i, nr = folio_nr_pages(folio); - if (!PageHighMem(page)) { + if (flush_coherent_icache()) + return; + + if (!folio_test_highmem(folio)) { + void *addr = folio_address(folio); for (i = 0; i < nr; i++) - __flush_dcache_icache(lowmem_page_address(page + i)); - } else { + __flush_dcache_icache(addr + i * PAGE_SIZE); + } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { for (i = 0; i < nr; i++) { - void *start = kmap_local_page(page + i); + void *start = kmap_local_folio(folio, i * PAGE_SIZE); __flush_dcache_icache(start); kunmap_local(start); } - } -} - -void flush_dcache_icache_page(struct page *page) -{ - if (flush_coherent_icache()) - return; - - if (PageCompound(page)) - return flush_dcache_icache_hugepage(page); - - if (!PageHighMem(page)) { - __flush_dcache_icache(lowmem_page_address(page)); - } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { - void *start = kmap_local_page(page); - - __flush_dcache_icache(start); - kunmap_local(start); } else { - 
flush_dcache_icache_phys(page_to_phys(page)); + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + flush_dcache_icache_phys((pfn + i) * PAGE_SIZE); } } -EXPORT_SYMBOL(flush_dcache_icache_page); void clear_user_page(void *page, unsigned long vaddr, struct page *pg) { diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c index 58c8d9849cb1..f3cb91107a47 100644 --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c +++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c @@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte) * * This must always be called with the pte lock held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { if (is_vm_hugetlb_page(vma)) book3e_hugetlb_preload(vma, address, *ptep); diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index cb2dcdb18f8e..b3c7b874a7a2 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte) return 0; } -static struct page *maybe_pte_to_page(pte_t pte) +static struct folio *maybe_pte_to_folio(pte_t pte) { unsigned long pfn = pte_pfn(pte); struct page *page; @@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte) page = pfn_to_page(pfn); if (PageReserved(page)) return NULL; - return page; + return page_folio(page); } #ifdef CONFIG_PPC_BOOK3S @@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte) pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS); if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) || cpu_has_feature(CPU_FTR_NOEXECUTE))) { - struct page *pg = maybe_pte_to_page(pte); - if (!pg) + struct folio *folio = maybe_pte_to_folio(pte); + if (!folio) return pte; - if (!test_bit(PG_dcache_clean, &pg->flags)) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } return pte; @@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; } */ static inline pte_t set_pte_filter(pte_t pte) { - struct page *pg; + struct folio *folio; if (radix_enabled()) return pte; @@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte) return pte; /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) return pte; /* If the page clean, we move on */ - if (test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) return pte; /* If it's an exec fault, we flush the cache and make it clean */ if (is_exec_fault()) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); return pte; } @@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte) static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, int dirty) { - struct page *pg; + struct folio *folio; if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) return pte; @@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, #endif /* CONFIG_DEBUG_VM */ /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = 
maybe_pte_to_folio(pte);
+	if (unlikely(!folio))
 		goto bail;

 	/* If the page is already clean, we move on */
-	if (test_bit(PG_dcache_clean, &pg->flags))
+	if (test_bit(PG_dcache_clean, &folio->flags))
 		goto bail;

 	/* Clean the page and set PG_dcache_clean */
-	flush_dcache_icache_page(pg);
-	set_bit(PG_dcache_clean, &pg->flags);
+	flush_dcache_icache_folio(folio);
+	set_bit(PG_dcache_clean, &folio->flags);

 bail:
 	return pte_mkexec(pte);
@@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
-		pte_t pte)
+void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+		pte_t pte, unsigned int nr)
 {
 	/*
 	 * Make sure hardware valid bit is not set. We don't do
@@ -203,7 +203,14 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 	pte = set_pte_filter(pte);

 	/* Perform the setting of the PTE */
-	__set_pte_at(mm, addr, ptep, pte, 0);
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pte, 0);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + PAGE_SIZE);
+		addr += PAGE_SIZE;
+	}
 }

 void unmap_kernel_page(unsigned long va)

From patchwork Tue Feb 28 21:37:23 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 linux-riscv@lists.infradead.org
Subject: [PATCH v3 20/34] riscv: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:23 +0000
Message-Id: <20230228213738.272178-21-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
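[Editorial aside, not part of the patch: within a folio the page frame
numbers are consecutive, so each architecture's set_ptes() loop can
advance the PTE value arithmetically instead of re-encoding it. On
riscv the PFN is stored at _PAGE_PFN_SHIFT, so the per-page step is as
below; the helper name is illustrative.]

	/*
	 * Sketch of the PFN step performed by the riscv set_ptes() loop
	 * in the diff that follows.
	 */
	static inline pte_t pte_next_pfn_sketch(pte_t pte)
	{
		return __pte(pte_val(pte) + (1UL << _PAGE_PFN_SHIFT));
	}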
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org Acked-by: Palmer Dabbelt --- arch/riscv/include/asm/cacheflush.h | 19 +++++++++---------- arch/riscv/include/asm/pgtable.h | 26 +++++++++++++++++++------- arch/riscv/mm/cacheflush.c | 11 ++--------- 3 files changed, 30 insertions(+), 26 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 03e3b95ae6da..10e5e96f09b5 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). - */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing. diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index b516f3b59616..3a3a776fc047 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -405,8 +405,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -415,8 +415,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. 
 */
-	local_flush_tlb_page(address);
+	while (nr--)
+		local_flush_tlb_page(address + nr * PAGE_SIZE);
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb update_mmu_cache
@@ -456,12 +459,21 @@ static inline void __set_pte_at(struct mm_struct *mm,
 	set_pte(ptep, pteval);
 }

-static inline void set_pte_at(struct mm_struct *mm,
-	unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pteval, unsigned int nr)
 {
-	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
-	__set_pte_at(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, nr);
+
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pteval);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+		pte_val(pteval) += 1 << _PAGE_PFN_SHIFT;
+	}
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline void pte_clear(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep)

diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index fcd6145fbead..e36a851e5788 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -81,16 +81,9 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
 #ifdef CONFIG_MMU
 void flush_icache_pte(pte_t pte)
 {
-	struct page *page = pte_page(pte);
+	struct folio *folio = page_folio(pte_page(pte));

-	/*
-	 * HugeTLB pages are always fully mapped, so only setting head page's
-	 * PG_dcache_clean flag is enough.
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
 		flush_icache_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }

From patchwork Tue Feb 28 21:37:24 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
 Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org
Subject: [PATCH v3 21/34] s390: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:24 +0000
Message-Id: <20230228213738.272178-22-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().
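[Editorial aside, not part of the patch: s390 PTEs encode the page
frame address directly in the high bits of the PTE value, which is why
the set_ptes() loops in the diff below can step to the next page of
the folio with a plain PAGE_SIZE addition. The helper name is
illustrative.]

	/*
	 * Illustrative helper matching the increment used by the s390
	 * set_ptes() loops; assumes a present, non-swap PTE encoding.
	 */
	static inline pte_t pte_next_page_sketch(pte_t entry)
	{
		return __pte(pte_val(entry) + PAGE_SIZE);
	}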
Signed-off-by: Matthew Wilcox (Oracle) Cc: Heiko Carstens Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: linux-s390@vger.kernel.org Reviewed-by: Gerald Schaefer --- arch/s390/include/asm/pgtable.h | 34 ++++++++++++++++++++++++--------- 1 file changed, 25 insertions(+), 9 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 2c70b4d1263d..46bf475116f1 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m); * tables contain all the necessary information. */ #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, addr, ptep, nr) do { } while (0) #define update_mmu_cache_pmd(vma, address, ptep) do { } while (0) /* @@ -1317,21 +1318,36 @@ pgprot_t pgprot_writecombine(pgprot_t prot); pgprot_t pgprot_writethrough(pgprot_t prot); /* - * Certain architectures need to do special things when PTEs - * within a page table are directly modified. Thus, the following - * hook is made available. + * Set multiple PTEs to consecutive pages with a single call. All PTEs + * are within the same folio, PMD and VMA. */ -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t entry) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t entry, unsigned int nr) { if (pte_present(entry)) entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED)); - if (mm_has_pgste(mm)) - ptep_set_pte_at(mm, addr, ptep, entry); - else - set_pte(ptep, entry); + if (mm_has_pgste(mm)) { + for (;;) { + ptep_set_pte_at(mm, addr, ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + addr += PAGE_SIZE; + } + } else { + for (;;) { + set_pte(ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + } + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + /* * Conversion functions: convert a page and protection to a page entry, * and a page entry and page directory to the page they refer to. 
From patchwork Tue Feb 28 21:37:25 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
 Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
 linux-sh@vger.kernel.org
Subject: [PATCH v3 22/34] superh: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:25 +0000
Message-Id: <20230228213738.272178-23-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages(). Change the PG_dcache_clean flag from being
per-page to per-folio. Flush the entire folio containing the pages in
flush_icache_pages() for ease of implementation.
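[Editorial aside, not part of the patch: to make the "flush the entire
folio" remark concrete, a sketch of what flush_icache_pages() amounts
to when the implementation simply flushes the containing folio. The
function name and the lowmem assumption are illustrative.]

	/*
	 * Illustration only: flushing the whole folio trivially covers
	 * all 'nr' requested pages. Assumes the folio has a lowmem
	 * kernel mapping, so folio_address() is valid.
	 */
	static void flush_icache_pages_sketch(struct vm_area_struct *vma,
			struct page *page, unsigned int nr)
	{
		struct folio *folio = page_folio(page);
		unsigned long start = (unsigned long)folio_address(folio);

		flush_icache_range(start, start + folio_size(folio));
	}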
Signed-off-by: Matthew Wilcox (Oracle) Cc: Yoshinori Sato Cc: Rich Felker Cc: John Paul Adrian Glaubitz Cc: linux-sh@vger.kernel.org --- arch/sh/include/asm/cacheflush.h | 21 ++++++++----- arch/sh/include/asm/pgtable.h | 6 ++-- arch/sh/include/asm/pgtable_32.h | 16 ++++++++-- arch/sh/mm/cache-j2.c | 4 +-- arch/sh/mm/cache-sh4.c | 26 ++++++++++----- arch/sh/mm/cache-sh7705.c | 26 +++++++++------ arch/sh/mm/cache.c | 54 ++++++++++++++++++-------------- arch/sh/mm/kmap.c | 3 +- 8 files changed, 101 insertions(+), 55 deletions(-) diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 481a664287e2..9fceef6f3e00 100644 --- a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -13,9 +13,9 @@ * - flush_cache_page(mm, vmaddr, pfn) flushes a single page * - flush_cache_range(vma, start, end) flushes a range of pages * - * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache + * - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache * - flush_icache_range(start, end) flushes(invalidates) a range for icache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache * - flush_cache_sigtramp(vaddr) flushes the signal trampoline */ extern void (*local_flush_cache_all)(void *args); @@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args); extern void (*local_flush_cache_dup_mm)(void *args); extern void (*local_flush_cache_page)(void *args); extern void (*local_flush_cache_range)(void *args); -extern void (*local_flush_dcache_page)(void *args); +extern void (*local_flush_dcache_folio)(void *args); extern void (*local_flush_icache_range)(void *args); -extern void (*local_flush_icache_page)(void *args); +extern void (*local_flush_icache_folio)(void *args); extern void (*local_flush_cache_sigtramp)(void *args); static inline void cache_noop(void *args) { } @@ -42,11 +42,18 @@ extern void flush_cache_page(struct vm_area_struct *vma, extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range -extern void flush_icache_page(struct vm_area_struct *vma, - struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 3ce30becf6df..1a8fdc3bc363 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -102,13 +102,15 @@ extern void __update_cache(struct vm_area_struct *vma, extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void -update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; __update_cache(vma, address, pte); __update_tlb(vma, address, pte); } +#define 
update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..03ba1834e126 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -307,7 +307,19 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define set_pte(pteptr, pteval) (*(pteptr) = pteval) #endif -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * (pmds are folded into pgds so this doesn't get actually called, @@ -323,7 +335,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define pte_none(x) (!pte_val(x)) #define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) #define pmd_none(x) (!pmd_val(x)) #define pmd_present(x) (pmd_val(x)) diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c index f277862a11f5..9ac960214380 100644 --- a/arch/sh/mm/cache-j2.c +++ b/arch/sh/mm/cache-j2.c @@ -55,9 +55,9 @@ void __init j2_cache_init(void) local_flush_cache_dup_mm = j2_flush_both; local_flush_cache_page = j2_flush_both; local_flush_cache_range = j2_flush_both; - local_flush_dcache_page = j2_flush_dcache; + local_flush_dcache_folio = j2_flush_dcache; local_flush_icache_range = j2_flush_icache; - local_flush_icache_page = j2_flush_icache; + local_flush_icache_folio = j2_flush_icache; local_flush_cache_sigtramp = j2_flush_icache; pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base)); diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c index 72c2e1b46c08..862046f26981 100644 --- a/arch/sh/mm/cache-sh4.c +++ b/arch/sh/mm/cache-sh4.c @@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh4_flush_dcache_page(void *arg) +static void sh4_flush_dcache_folio(void *arg) { - struct page *page = arg; - unsigned long addr = (unsigned long)page_address(page); + struct folio *folio = arg; #ifndef CONFIG_SMP - struct address_space *mapping = page_mapping_file(page); + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); else #endif - flush_cache_one(CACHE_OC_ADDRESS_ARRAY | - (addr & shm_align_mask), page_to_phys(page)); + { + unsigned long pfn = folio_pfn(folio); + unsigned long addr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + flush_cache_one(CACHE_OC_ADDRESS_ARRAY | + (addr & shm_align_mask), + pfn * PAGE_SIZE); + addr += PAGE_SIZE; + pfn++; + } + } wmb(); } @@ -379,7 +389,7 @@ void __init sh4_cache_init(void) __raw_readl(CCN_PRR)); local_flush_icache_range = sh4_flush_icache_range; - local_flush_dcache_page = sh4_flush_dcache_page; + local_flush_dcache_folio = sh4_flush_dcache_folio; local_flush_cache_all = sh4_flush_cache_all; local_flush_cache_mm = sh4_flush_cache_mm; local_flush_cache_dup_mm = sh4_flush_cache_mm; diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c index 9b63a53a5e46..b509a407588f 100644 --- a/arch/sh/mm/cache-sh7705.c +++ b/arch/sh/mm/cache-sh7705.c @@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys) * Write back & invalidate the D-cache of the page. * (To avoid "alias" issues) */ -static void sh7705_flush_dcache_page(void *arg) +static void sh7705_flush_dcache_folio(void *arg) { - struct page *page = arg; - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = arg; + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); - else - __flush_dcache_page(__pa(page_address(page))); + clear_bit(PG_dcache_clean, &folio->flags); + else { + unsigned long pfn = folio_pfn(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_dcache_page((pfn + i) * PAGE_SIZE); + } } static void sh7705_flush_cache_all(void *args) @@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args) * Not entirely sure why this is necessary on SH3 with 32K cache but * without it we get occasional "Memory fault" when loading a program. 
*/ -static void sh7705_flush_icache_page(void *page) +static void sh7705_flush_icache_folio(void *arg) { - __flush_purge_region(page_address(page), PAGE_SIZE); + struct folio *folio = arg; + __flush_purge_region(folio_address(folio), folio_size(folio)); } void __init sh7705_cache_init(void) { local_flush_icache_range = sh7705_flush_icache_range; - local_flush_dcache_page = sh7705_flush_dcache_page; + local_flush_dcache_folio = sh7705_flush_dcache_folio; local_flush_cache_all = sh7705_flush_cache_all; local_flush_cache_mm = sh7705_flush_cache_all; local_flush_cache_dup_mm = sh7705_flush_cache_all; local_flush_cache_range = sh7705_flush_cache_all; local_flush_cache_page = sh7705_flush_cache_page; - local_flush_icache_page = sh7705_flush_icache_page; + local_flush_icache_folio = sh7705_flush_icache_folio; } diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c index 3aef78ceb820..93fc5fb8ec1c 100644 --- a/arch/sh/mm/cache.c +++ b/arch/sh/mm/cache.c @@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop; void (*local_flush_cache_dup_mm)(void *args) = cache_noop; void (*local_flush_cache_page)(void *args) = cache_noop; void (*local_flush_cache_range)(void *args) = cache_noop; -void (*local_flush_dcache_page)(void *args) = cache_noop; +void (*local_flush_dcache_folio)(void *args) = cache_noop; void (*local_flush_icache_range)(void *args) = cache_noop; -void (*local_flush_icache_page)(void *args) = cache_noop; +void (*local_flush_icache_folio)(void *args) = cache_noop; void (*local_flush_cache_sigtramp)(void *args) = cache_noop; void (*__flush_wback_region)(void *start, int size); @@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + struct folio *folio = page_folio(page); + + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(vto); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } if (vma->vm_flags & VM_EXEC) @@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + test_bit(PG_dcache_clean, &folio->flags)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(vfrom); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } } void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); - if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) && - test_bit(PG_dcache_clean, &from->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) && + test_bit(PG_dcache_clean, &src->flags)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(vfrom); @@ -136,35 +141,37 @@ EXPORT_SYMBOL(clear_user_highpage); void __update_cache(struct 
vm_area_struct *vma, unsigned long address, pte_t pte) { - struct page *page; unsigned long pfn = pte_pfn(pte); if (!boot_cpu_data.dcache.n_aliases) return; - page = pfn_to_page(pfn); if (pfn_valid(pfn)) { - int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags); if (dirty) - __flush_purge_region(page_address(page), PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } void __flush_anon_page(struct page *page, unsigned long vmaddr) { + struct folio *folio = page_folio(page); unsigned long addr = (unsigned long) page_address(page); if (pages_do_alias(addr, vmaddr)) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); /* XXX.. For now kunmap_coherent() does a purge */ /* __flush_purge_region((void *)kaddr, PAGE_SIZE); */ kunmap_coherent(kaddr); - } else - __flush_purge_region((void *)addr, PAGE_SIZE); + } else + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } @@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, } EXPORT_SYMBOL(flush_cache_range); -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - cacheop_on_each_cpu(local_flush_dcache_page, page, 1); + cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void flush_icache_range(unsigned long start, unsigned long end) { @@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(flush_icache_range); -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { - /* Nothing uses the VMA, so just pass the struct page along */ - cacheop_on_each_cpu(local_flush_icache_page, page, 1); + /* Nothing uses the VMA, so just pass the folio along */ - cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1); } void flush_cache_sigtramp(unsigned long address)
diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c index 73fd7cc99430..fa50e8f6e7a9 100644 --- a/arch/sh/mm/kmap.c +++ b/arch/sh/mm/kmap.c @@ -27,10 +27,11 @@ void __init kmap_coherent_init(void) void *kmap_coherent(struct page *page, unsigned long addr) { + struct folio *folio = page_folio(page); enum fixed_addresses idx; unsigned long vaddr; - BUG_ON(!test_bit(PG_dcache_clean, &page->flags)); + BUG_ON(!test_bit(PG_dcache_clean, &folio->flags)); preempt_disable(); pagefault_disable();

From patchwork Tue Feb 28 21:37:26 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62643
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "David S.
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v3 23/34] sparc32: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:26 +0000 Message-Id: <20230228213738.272178-24-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1759113435481600155?= X-GMAIL-MSGID: =?utf-8?q?1759113435481600155?= Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_32.h | 9 +++++++-- arch/sparc/include/asm/pgtable_32.h | 15 ++++++++++++++- arch/sparc/mm/init_32.c | 13 +++++++++++-- 3 files changed, 32 insertions(+), 5 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index adb6991d0455..8dba35d63328 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -16,6 +16,7 @@ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) #define flush_icache_page(vma, pg) do { } while (0) +#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ @@ -35,11 +36,15 @@ #define flush_page_for_dma(addr) \ sparc32_cachetlb_ops->page_for_dma(addr) -struct page; void sparc_flush_page_to_ram(struct page *page); +void sparc_flush_folio_to_ram(struct folio *folio); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) sparc_flush_page_to_ram(page) +#define flush_dcache_folio(folio) sparc_flush_folio_to_ram(folio) +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..47ae55ea1837 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -101,7 +101,19 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) srmmu_swap((unsigned long *)ptep, pte_val(pteval)); } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline int srmmu_device_memory(unsigned long x) { @@ -318,6 +330,7 @@ void mmu_info(struct seq_file *m); #define FAULT_CODE_USER 0x4 #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) void srmmu_mapiorange(unsigned int bus, unsigned long xpa, unsigned long xva, unsigned int len); diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 
9c0ea457bdf0..d96a14ffceeb 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page) { unsigned long vaddr = (unsigned long)page_address(page); - if (vaddr) - __flush_page_to_ram(vaddr); + __flush_page_to_ram(vaddr); } EXPORT_SYMBOL(sparc_flush_page_to_ram); +void sparc_flush_folio_to_ram(struct folio *folio) +{ + unsigned long vaddr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_page_to_ram(vaddr + i * PAGE_SIZE); +} +EXPORT_SYMBOL(sparc_flush_folio_to_ram); + static const pgprot_t protection_map[16] = { [VM_NONE] = PAGE_NONE, [VM_READ] = PAGE_READONLY,

From patchwork Tue Feb 28 21:37:27 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62642
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "David S. Miller", sparclinux@vger.kernel.org
Subject: [PATCH v3 24/34] sparc64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:27 +0000
Message-Id: <20230228213738.272178-25-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Convert the PG_dcache_dirty flag from being per-page to per-folio.
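The per-folio conversion is the interesting part here: sparc64 defers D-cache flushes by recording, in the page flags, both a dirty bit and the number of the CPU whose cache holds the dirty lines, then flushing when the page is next faulted in. Moving that state to folio->flags means one record now covers every page of the folio. In outline (this mirrors the flush_dcache() hunk in the diff below; only the comments are added):

	/* Conceptual model of the deferred-flush bookkeeping. */
	if (pg_flags & (1UL << PG_dcache_dirty)) {
		int cpu = (pg_flags >> PG_dcache_cpu_shift) &
			  PG_dcache_cpu_mask;	/* who dirtied it */

		if (cpu == this_cpu)
			flush_dcache_folio_impl(folio);	/* local flush */
		else
			smp_flush_dcache_folio_impl(folio, cpu); /* cross-call */

		clear_dcache_dirty_cpu(folio, cpu);
	}

The real code does the flag updates with inline assembly so the CPU number and the dirty bit change atomically with respect to other flag users.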
Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 25 +++++++-- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 117 insertions(+), 65 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 2dc8d4641734..d5c0088e0c6a 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -911,8 +911,20 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1); #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -931,8 +943,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -947,7 +959,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_area_struct *, unsigned long addr, + pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, 
addr, ptep, 1) #ifdef CONFIG_TRANSPARENT_HUGEPAGE void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd);
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index a55295d1b924..90ef8677ac89 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah; #endif extern unsigned long xcall_flush_dcache_page_spitfire; -static inline void __local_flush_dcache_page(struct page *page) +static inline void __local_flush_dcache_folio(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } -void smp_flush_dcache_page_impl(struct page *page, int cpu) +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu) { int this_cpu; @@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) this_cpu = get_cpu(); if (cpu == this_cpu) { - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); } else if (cpu_online(cpu)) { - void *pg_addr = page_address(page); + void *pg_addr = folio_address(folio); u64 data0 = 0; if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpumask_of(cpu)); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpumask_of(cpu)); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } } put_cpu(); } -void flush_dcache_page_all(struct mm_struct *mm, struct page *page) +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio) { void *pg_addr; u64 data0; @@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) atomic_inc(&dcpage_flushes); #endif data0 = 0; - pg_addr = page_address(page); + pg_addr = folio_address(folio); if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpu_online_mask); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpu_online_mask); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE;
+ } } - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); preempt_enable(); }
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 04f9db0c3111..ab9aacbaf43c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0); #endif #endif -inline void flush_dcache_page_impl(struct page *page) +inline void flush_dcache_folio_impl(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + BUG_ON(tlb_type == hypervisor); #ifdef CONFIG_DEBUG_DCFLUSH atomic_inc(&dcpage_flushes); #endif #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), - ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, + ((tlb_type == spitfire) && + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } @@ -218,10 +223,10 @@ inline void flush_dcache_page_impl(struct page *page) #define PG_dcache_cpu_mask \ ((1UL<<ilog2(roundup_pow_of_two(NR_CPUS)))-1UL) -#define dcache_dirty_cpu(page) \ - (((page)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) +#define dcache_dirty_cpu(folio) \ + (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) -static inline void set_dcache_dirty(struct page *page, int this_cpu) +static inline void set_dcache_dirty(struct folio *folio, int this_cpu) { unsigned long mask = this_cpu; unsigned long non_cpu_bits; @@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu) "bne,pn %%xcc, 1b\n\t" " nop" : /* no outputs */ - : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags) + : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags) : "g1", "g7"); } -static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) +static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu) { unsigned long mask = (1UL << PG_dcache_dirty); @@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) " nop\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags), + : "r" (cpu), "r" (mask), "r" (&folio->flags), "i" (PG_dcache_cpu_mask), "i" (PG_dcache_cpu_shift) : "g1", "g7"); @@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn) page = pfn_to_page(pfn); if (page) { + struct folio *folio = page_folio(page); unsigned long pg_flags; - pg_flags = page->flags; + pg_flags = folio->flags; if (pg_flags & (1UL << PG_dcache_dirty)) { int cpu = ((pg_flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask); @@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn) * in the SMP case.
*/ if (cpu == this_cpu) - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); else - smp_flush_dcache_page_impl(page, cpu); + smp_flush_dcache_folio_impl(folio, cpu); - clear_dcache_dirty_cpu(page, cpu); + clear_dcache_dirty_cpu(folio, cpu); put_cpu(); } @@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size) } #endif /* CONFIG_HUGETLB_PAGE */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { struct mm_struct *mm; unsigned long flags; bool is_huge_tsb; pte_t pte = *ptep; + unsigned int i; if (tlb_type != hypervisor) { unsigned long pfn = pte_pfn(pte); @@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * } } #endif - if (!is_huge_tsb) - __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, - address, pte_val(pte)); + if (!is_huge_tsb) { + for (i = 0; i < nr; i++) { + __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, + address, pte_val(pte)); + address += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } + } spin_unlock_irqrestore(&mm->context.lock, flags); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { + unsigned long pfn = folio_pfn(folio); struct address_space *mapping; int this_cpu; @@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page) * is merely the zero page. The 'bigcore' testcase in GDB * causes this case to run millions of times. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; this_cpu = get_cpu(); - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) { - int dirty = test_bit(PG_dcache_dirty, &page->flags); + bool dirty = test_bit(PG_dcache_dirty, &folio->flags); if (dirty) { - int dirty_cpu = dcache_dirty_cpu(page); + int dirty_cpu = dcache_dirty_cpu(folio); if (dirty_cpu == this_cpu) goto out; - smp_flush_dcache_page_impl(page, dirty_cpu); + smp_flush_dcache_folio_impl(folio, dirty_cpu); } - set_dcache_dirty(page, this_cpu); + set_dcache_dirty(folio, this_cpu); } else { /* We could delay the flush for the !page_mapping * case too. But that case is for exec env/arg * pages and those are %99 certainly going to get * faulted into the tlb (and thus flushed) anyways. */ - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); } out: put_cpu(); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void __kprobes flush_icache_range(unsigned long start, unsigned long end) { @@ -2280,10 +2294,10 @@ void __init paging_init(void) setup_page_offset(); /* These build time checkes make sure that the dcache_dirty_cpu() - * page->flags usage will work. + * folio->flags usage will work. * * When a page gets marked as dcache-dirty, we store the - * cpu number starting at bit 32 in the page->flags. Also, + * cpu number starting at bit 32 in the folio->flags. Also, * functions like clear_dcache_dirty_cpu use the cpu mask * in 13-bit signed-immediate instruction fields. 
*/
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index 9a725547578e..3fa6a070912d 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, unsigned long paddr, pfn = pte_pfn(orig); struct address_space *mapping; struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) goto no_cache_flush; @@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, goto no_cache_flush; + folio = page_folio(page); /* A real file page? */ - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!mapping) goto no_cache_flush; paddr = (unsigned long) page_address(page); if ((paddr ^ vaddr) & (1 << 13)) - flush_dcache_page_all(mm, page); + flush_dcache_folio_all(mm, folio); } no_cache_flush:

From patchwork Tue Feb 28 21:37:28 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62639
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Richard Weinberger, Anton Ivanov, Johannes Berg, linux-um@lists.infradead.org
Subject: [PATCH v3 25/34] um: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:28 +0000
Message-Id: <20230228213738.272178-26-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().
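UML runs as a host process with coherent caches, so the only real work is the PTE loop; the cache hook can stay empty. What the core mm will do with the pair once callers are converted looks roughly like this (a sketch of a caller, not code from this patch; the surrounding names follow the conventions used elsewhere in the series):

	/* Map nr pages of a folio in one go (illustrative). */
	pte_t pte = mk_pte(folio_page(folio, 0), vma->vm_page_prot);

	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vma, addr, ptep, nr);	/* no-op on um */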
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Cc: linux-um@lists.infradead.org
--- arch/um/include/asm/pgtable.h | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h index a70d1618eb35..ca78c90ae74f 100644 --- a/arch/um/include/asm/pgtable.h +++ b/arch/um/include/asm/pgtable.h @@ -242,12 +242,20 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval) if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *pteptr, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - set_pte(pteptr, pteval); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + #define __HAVE_ARCH_PTE_SAME static inline int pte_same(pte_t pte_a, pte_t pte_b) { @@ -290,6 +298,7 @@ struct mm_struct; extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr); #define update_mmu_cache(vma,address,ptep) do {} while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do {} while (0) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that

From patchwork Tue Feb 28 21:37:29 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62629
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v3 26/34] x86: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:29 +0000
Message-Id: <20230228213738.272178-27-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Convert set_pte_at() into set_ptes() and add a noop update_mmu_cache_range().
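Two details are worth noting. First, page_table_check_ptes_set() is told about the whole batch once, before any PTE is written, so the checker accounts for every PFN exactly once. Second, advancing the PTE by PAGE_SIZE works because x86 stores the physical address directly in the PTE. A worked example with illustrative values:

	/*
	 * With 4 KiB pages: pte_val == 0x12345067 encodes PFN 0x12345
	 * (bits 12 and up) plus flags 0x067 (present, writable, user,
	 * accessed, dirty).  Adding PAGE_SIZE (0x1000) gives 0x12346067:
	 * PFN 0x12346 with identical flags, i.e. the next page with the
	 * same permissions, which is what the set_ptes() loop relies on.
	 */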
Peter Anvin" --- arch/x86/include/asm/pgtable.h | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 84be3e07b112..f424371ea143 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1019,13 +1019,22 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp) return res; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - set_pte(ptep, pte); + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { @@ -1291,6 +1300,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { } +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) +{ +} static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { From patchwork Tue Feb 28 21:37:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62621 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp3267380wrd; Tue, 28 Feb 2023 13:39:37 -0800 (PST) X-Google-Smtp-Source: AK7set/iiKaYP2KRwI5h5xK/BaSbgSC8dQNhJqk9NxuCfPX5RUr/XDA+IwMGjaoSHEqFMs4vh5c6 X-Received: by 2002:a17:903:42c5:b0:19c:d5f9:3386 with SMTP id jy5-20020a17090342c500b0019cd5f93386mr3396893plb.61.1677620377614; Tue, 28 Feb 2023 13:39:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677620377; cv=none; d=google.com; s=arc-20160816; b=SYmnEmF36Ok5mWO4piMdjPBr4lQBSdTbMgubpPgRlyEMAI5FSXgtql4Ah3shAkse+6 puSRYP+wZLoZOXiFTKNPzGrgGFtkx7pygEKf1C1/Uofp9Rp/97B3HLVofHjHla4xQaQh +FPWi6Y9V5tMqOhIrIpByvjhZr/ab3CgMDp5/w51+5RFxYN03hyhMjNXjYnuFsEJG4/L UskEzL+COilWxwJsaSGa3sAGK4KTksGSjajPf6LK/uDvUpeQeu5Ygv5kLpXmAtbik+Ox EfW4G5kRkZOY++aBq0jTQDI/e1zJtxNwx83gAKpPkWE9Qz5ucpasjS4KDrvM1/poZPAN dErg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=FcAVSqRhOvKHjYDKfG+YKts/V6VftTlUNnoYvCjS+is=; b=V8GOtsRQT8FbQR4CybVWQqTfKJK/25rMAL13f2EUa8thjIZo7eOXIZJ7CM2zPxxjLa CVoQtEPSL5gTv0XJD60EJOyJ4wCay1FK+u808AlfUz9sDaVc0RafmEjhgwLIhsMKQtLF vJh+HjT7VFE/NI4/Bkps7CWhzD6vXtSnuWJdPVw1BWiIkPZN4n5ig4ViJ/IS3M+zLyr1 pb65n0Fg2tZsulc3K6dygEt3VdCQwk3G05d/Axi5+OZxlH+Ofmi1IU8BkUKh84QjTOnf BFeDVEdX5eEluElTLQrRVyAyaeAFR2BZ7TG5MTloidgVJO+5kMOwS11WbRwq4TFSpPAw 8XNw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=gjFOpRbF; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
From patchwork Tue Feb 28 21:37:30 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62621

From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Max Filippov, linux-xtensa@linux-xtensa.org
Subject: [PATCH v3 27/34] xtensa: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:30 +0000
Message-Id: <20230228213738.272178-28-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 24 +++++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 72 insertions(+), 44 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)	do { } while (0)
+#define	flush_icache_pages(vma, page, nr)	do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..293101530541 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -301,17 +301,25 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	update_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void
 set_pmd(pmd_t *pmdp, pmd_t pmdval)
 {
@@ -407,8 +415,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern  void update_mmu_cache(struct vm_area_struct * vma,
-			      unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;

diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..65c0d5298041 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
 
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
 	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_local(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }
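The shape of the converted flush_dcache_folio() is easier to see in
isolation: for a folio of nr pages, the physical address and the
file-offset-derived alias colour each advance by one page per iteration.
A stand-alone sketch in user-space C (ALIAS_MASK and the flush helper are
stand-ins chosen for illustration, not the real xtensa primitives):

#include <stdio.h>

#define PAGE_SIZE   4096UL
#define ALIAS_MASK  0x7000UL	/* assume a 32KiB cache way, 8 colours */

static void flush_alias_model(unsigned long addr)
{
	printf("flush colour window at %#lx\n", addr);
}

/* Walk a folio page by page, flushing both the physical colour and the
 * colour of the user mapping derived from the file position. */
static void flush_folio_model(unsigned long phys, unsigned long pos,
			      unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++) {
		flush_alias_model(phys & ALIAS_MASK);	/* physical colour */
		flush_alias_model(pos & ALIAS_MASK);	/* mapping colour */
		phys += PAGE_SIZE;
		pos += PAGE_SIZE;
	}
}

int main(void)
{
	/* a 4-page folio at phys 1MiB backing file offset 64KiB */
	flush_folio_model(0x100000UL, 0x10000UL, 4);
	return 0;
}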
From patchwork Tue Feb 28 21:37:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62620
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 28/34] mm: Remove page_mapping_file()
Date: Tue, 28 Feb 2023 21:37:31 +0000
Message-Id: <20230228213738.272178-29-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

This function has no more users.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1b1ba3d5100d..c21b3ad1068c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -394,14 +394,6 @@ static inline struct address_space *page_file_mapping(struct page *page)
 	return folio_file_mapping(page_folio(page));
 }
 
-/*
- * For file cache pages, return the address_space, otherwise return NULL
- */
-static inline struct address_space *page_mapping_file(struct page *page)
-{
-	return folio_flush_mapping(page_folio(page));
-}
-
 /**
  * folio_inode - Get the host inode for this folio.
  * @folio: The folio.
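Any remaining out-of-tree caller can open-code what the deleted wrapper
did, since it was a one-liner over the folio API. A kernel-context fragment
(not stand-alone; the function name here is illustrative, not an existing
kernel symbol):

/* Equivalent of the removed page_mapping_file(); kernel context assumed. */
static inline struct address_space *file_mapping_of(struct page *page)
{
	return folio_flush_mapping(page_folio(page));
}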
From patchwork Tue Feb 28 21:37:32 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62625

From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 29/34] mm: Rationalise flush_icache_pages() and flush_icache_page()
Date: Tue, 28 Feb 2023 21:37:32 +0000
Message-Id: <20230228213738.272178-30-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Move the default (no-op) implementation of flush_icache_pages() to
<linux/cacheflush.h> from <asm-generic/cacheflush.h>. Remove the
flush_icache_page() wrapper from each architecture and provide it once in
<linux/cacheflush.h>.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Geert Uytterhoeven
---
 arch/alpha/include/asm/cacheflush.h     |  5 +----
 arch/arc/include/asm/cacheflush.h       |  9 ---------
 arch/arm/include/asm/cacheflush.h       |  7 -------
 arch/csky/abiv1/inc/abi/cacheflush.h    |  1 -
 arch/csky/abiv2/inc/abi/cacheflush.h    |  1 -
 arch/hexagon/include/asm/cacheflush.h   |  2 +-
 arch/loongarch/include/asm/cacheflush.h |  2 --
 arch/m68k/include/asm/cacheflush_mm.h   |  1 -
 arch/mips/include/asm/cacheflush.h      |  6 ------
 arch/nios2/include/asm/cacheflush.h     |  2 +-
 arch/nios2/mm/cacheflush.c              |  1 +
 arch/parisc/include/asm/cacheflush.h    |  2 +-
 arch/sh/include/asm/cacheflush.h        |  2 +-
 arch/sparc/include/asm/cacheflush_32.h  |  2 --
 arch/sparc/include/asm/cacheflush_64.h  |  3 ---
 arch/xtensa/include/asm/cacheflush.h    |  4 ----
 include/asm-generic/cacheflush.h        | 12 ------------
 include/linux/cacheflush.h              |  9 +++++++++
 18 files changed, 15 insertions(+), 56 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 3956460e69e2..36a7e924c3b9 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -53,10 +53,6 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_user_page flush_icache_user_page
 #endif /* CONFIG_SMP */
 
-/* This is used only in __do_fault and do_swap_page. */
-#define flush_icache_page(vma, page) \
-	flush_icache_user_page((vma), (page), 0, 0)
-
 /*
  * Both implementations of flush_icache_user_page flush the entire
  * address space, so one call, no matter how many pages.
@@ -66,6 +62,7 @@ static inline void flush_icache_pages(struct vm_area_struct *vma,
 {
 	flush_icache_user_page(vma, page, 0, 0);
 }
+#define flush_icache_pages flush_icache_pages
 
 #include <asm-generic/cacheflush.h>

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 04f65f588510..bd5b1a9a0544 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -18,15 +18,6 @@
 #include <linux/mm.h>
 #include <asm/shmparam.h>
 
-/*
- * Semantically we need this because icache doesn't snoop dcache/dma.
- * However ARC Cache flush requires paddr as well as vaddr, latter not available
- * in the flush_icache_page() API.  So we no-op it but do the equivalent work
- * in update_mmu_cache()
- */
-#define flush_icache_page(vma, page)
-#define flush_icache_pages(vma, page, nr)
-
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 841e268d2374..f6181f69577f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -321,13 +321,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
 
-/*
- * We don't appear to need to do anything here.  In fact, if we did, we'd
- * duplicate cache flushing elsewhere performed by flush_dcache_page().
- */
-#define flush_icache_page(vma,page)	do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 /*
  * flush_cache_vmap() is used when creating mappings (eg, via vmap,
  * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index 0d6cb65624c4..908d8b0bc4fd 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -45,7 +45,6 @@ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, u
 #define flush_cache_vmap(start, end)		cache_wbinv_all()
 #define flush_cache_vunmap(start, end)		cache_wbinv_all()
 
-#define flush_icache_page(vma, page)		do {} while (0);
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
 #define flush_icache_mm_range(mm, start, end)	cache_wbinv_range(start, end)
 #define flush_icache_deferred(mm)		do {} while (0);

diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index 9c728933a776..40be16907267 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -33,7 +33,6 @@ static inline void flush_dcache_page(struct page *page)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)		do { } while (0)
 
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 63ca314ede89..bdacf72d97e1 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -18,7 +18,7 @@
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *  - flush_icache_range(start, end) flush a range of instructions
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
- *  - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
+ *  - flush_icache_pages(vma, pg, nr) flushes(invalidates) nr pages for icache
  *
  * Need to doublecheck which one is really needed for ptrace stuff to work.
 */
diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 7907eb42bfbd..326ac6f1b27c 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -46,8 +46,6 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_page(vma, vmaddr, pfn)		do { } while (0)
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
-#define flush_icache_page(vma, page)			do { } while (0)
-#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_folio(folio)			do { } while (0)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index d43c8bce149b..c67a8d2e6d6a 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -260,7 +260,6 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 #define flush_icache_pages(vma, page, nr)	\
 	__flush_pages_to_ram(page_address(page), nr)
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				   unsigned long addr, int len);

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 2683cade42ef..043e50effc62 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -82,12 +82,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-		struct page *page, unsigned int nr)
-{
-}
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
-
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*__flush_icache_user_range)(unsigned long start,

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 8624ca83cffe..7c48c5213fb7 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -35,7 +35,7 @@ void flush_dcache_folio(struct folio *folio);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1);
+#define flush_icache_pages flush_icache_pages
 
 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)

diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 471485a84b2c..2565767b98a3 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -147,6 +147,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 	__flush_dcache(start, end);
 	__flush_icache(start, end);
 }
+#define flush_icache_pages flush_icache_pages
 
 void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 		      unsigned long pfn)
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 0bf8b69d086b..e4fdce328dbd 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -59,7 +59,7 @@ static inline void flush_dcache_page(struct page *page)
 
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 
 #define flush_icache_range(s,e)		do { 		\
 	flush_kernel_dcache_range_asm(s,e); 		\

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 9fceef6f3e00..878b6b551bd2 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -53,7 +53,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 extern void flush_cache_sigtramp(unsigned long address);
 
 struct flusher_data {

diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index 8dba35d63328..21f6c918238b 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -15,8 +15,6 @@
 #define flush_cache_page(vma,addr,pfn) \
 	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
-#define flush_icache_page(vma, pg)		do { } while (0)
-#define flush_icache_pages(vma, pg, nr)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\

diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index a9a719f04d06..0e879004efff 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -53,9 +53,6 @@ static inline void flush_dcache_page(struct page *page)
 	flush_dcache_folio(page_folio(page));
 }
 
-#define flush_icache_page(vma, pg)	do { } while(0)
-#define flush_icache_pages(vma, pg, nr)	do { } while(0)
-
 void flush_ptrace_access(struct vm_area_struct *, struct page *,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write);

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 35153f6725e4..785a00ce83c1 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -160,10 +160,6 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 		__invalidate_icache_range(start,(end) - (start));	\
 	} while (0)
 
-/* This is not required, see Documentation/core-api/cachetlb.rst */
-#define	flush_icache_page(vma,page)	do { } while (0)
-#define	flush_icache_pages(vma, page, nr)	do { } while (0)
-
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 09d51a680765..84ec53ccc450 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -77,18 +77,6 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #define flush_icache_user_range flush_icache_range
 #endif
 
-#ifndef flush_icache_page
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-				      struct page *page, unsigned int nr)
-{
-}
-
-static inline void flush_icache_page(struct vm_area_struct *vma,
-				     struct page *page)
-{
-}
-#endif
-
 #ifndef flush_icache_user_page
 static inline void flush_icache_user_page(struct vm_area_struct *vma,
 					  struct page *page,
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index 82136f3fcf54..55f297b2c23f 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -17,4 +17,13 @@ static inline void flush_dcache_folio(struct folio *folio)
 #define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
+#ifndef flush_icache_pages
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+#endif
+
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+
 #endif /* _LINUX_CACHEFLUSH_H */
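The override pattern this patch centralises is worth seeing end to end: an
architecture that implements flush_icache_pages() defines a same-named
macro, which suppresses the generic no-op, and flush_icache_page() is then
always the nr == 1 case. A stand-alone model in plain C (struct tags,
helper names and the printf are illustrative; delete the "architecture"
block to fall back to the generic path):

#include <stdio.h>

struct vm_area_struct;
struct page;

/* "Architecture" part: remove this block to get the generic no-op. */
static void arch_flush_icache_pages(struct vm_area_struct *vma,
				    struct page *page, unsigned int nr)
{
	(void)vma; (void)page;
	printf("arch flush of %u page(s)\n", nr);
}
#define flush_icache_pages arch_flush_icache_pages

/* "Generic" part, mirroring the new <linux/cacheflush.h> hunk above. */
#ifndef flush_icache_pages
static void flush_icache_pages(struct vm_area_struct *vma,
			       struct page *page, unsigned int nr)
{
	(void)vma; (void)page; (void)nr;
}
#endif

#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

int main(void)
{
	flush_icache_page(NULL, NULL);		/* the nr == 1 wrapper */
	flush_icache_pages(NULL, NULL, 8);	/* whole-range flush */
	return 0;
}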
From patchwork Tue Feb 28 21:37:33 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62623

From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 30/34] mm: Use flush_icache_pages() in do_set_pmd()
Date: Tue, 28 Feb 2023 21:37:33 +0000
Message-Id: <20230228213738.272178-31-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Push the iteration over each page down to the architectures (many can
flush the entire THP without iteration).
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index bfa3100ec5a3..69e844d5f75c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4222,8 +4222,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (unlikely(!pmd_none(*vmf->pmd)))
 		goto out;
 
-	for (i = 0; i < HPAGE_PMD_NR; i++)
-		flush_icache_page(vma, page + i);
+	flush_icache_pages(vma, page, HPAGE_PMD_NR);
 
 	entry = mk_huge_pmd(page, vma->vm_page_prot);
 	if (write)
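The point of pushing the loop down: an architecture whose icache
maintenance is ranged can flush a whole PMD-sized THP in one call instead
of HPAGE_PMD_NR single-page calls. A stand-alone sketch of that idea
(addresses, helper names and the 2MiB geometry are illustrative
assumptions):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define HPAGE_PMD_NR	512U	/* 2MiB THP on x86-64, as an example */

static void flush_icache_range_model(unsigned long start, unsigned long end)
{
	printf("flush [%#lx, %#lx)\n", start, end);
}

/* One ranged flush covers all nr pages; no per-page loop needed. */
static void flush_icache_pages_model(unsigned long kaddr, unsigned int nr)
{
	flush_icache_range_model(kaddr, kaddr + nr * PAGE_SIZE);
}

int main(void)
{
	flush_icache_pages_model(0xffff800000200000UL, HPAGE_PMD_NR);
	return 0;
}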
From patchwork Tue Feb 28 21:37:34 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62632

From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 31/34] filemap: Add filemap_map_folio_range()
Date: Tue, 28 Feb 2023 21:37:34 +0000
Message-Id: <20230228213738.272178-32-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

filemap_map_folio_range() maps a partial or full folio. Compared to the
original filemap_map_pages(), it updates the refcount once per folio
instead of once per page, which gives a minor performance improvement for
large folios.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; upstreaming it to will-it-scale is proposed
at [1]), this got a 2% performance gain on a 48C/96T Cascade Lake test box
with 96 processes running against xfs.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 98 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 44 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2723104cc06a..db86e459dde6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2202,16 +2202,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 }
 EXPORT_SYMBOL(filemap_get_folios);
 
-static inline
-bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
-{
-	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
-		return false;
-	if (index >= max)
-		return false;
-	return index < folio->index + folio_nr_pages(folio) - 1;
-}
-
 /**
  * filemap_get_folios_contig - Get a batch of contiguous folios
  * @mapping:	The address_space to search
@@ -3483,6 +3473,53 @@ static inline struct folio *next_map_page(struct address_space *mapping,
 					   mapping, xas, end_pgoff);
 }
 
+/*
+ * Map page range [start_page, start_page + nr_pages) of folio.
+ * start_page is gotten from start by folio_page(folio, start)
+ */
+static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
+			struct folio *folio, unsigned long start,
+			unsigned long addr, unsigned int nr_pages)
+{
+	vm_fault_t ret = 0;
+	struct vm_area_struct *vma = vmf->vma;
+	struct file *file = vma->vm_file;
+	struct page *page = folio_page(folio, start);
+	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
+	unsigned int ref_count = 0, count = 0;
+
+	do {
+		if (PageHWPoison(page))
+			continue;
+
+		if (mmap_miss > 0)
+			mmap_miss--;
+
+		/*
+		 * NOTE: If there're PTE markers, we'll leave them to be
+		 * handled in the specific fault path, and it'll prohibit the
+		 * fault-around logic.
+		 */
+		if (!pte_none(*vmf->pte))
+			continue;
+
+		if (vmf->address == addr)
+			ret = VM_FAULT_NOPAGE;
+
+		ref_count++;
+		do_set_pte(vmf, page, addr);
+		update_mmu_cache(vma, addr, vmf->pte);
+	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+
+	/* Restore the vmf->pte */
+	vmf->pte -= nr_pages;
+
+	folio_ref_add(folio, ref_count);
+	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
+
+	return ret;
+}
+
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 			     pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
@@ -3493,9 +3530,9 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
-	struct page *page;
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
 	vm_fault_t ret = 0;
+	int nr_pages = 0;
 
 	rcu_read_lock();
 	folio = first_map_page(mapping, &xas, end_pgoff);
@@ -3510,45 +3547,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	do {
-again:
-		page = folio_file_page(folio, xas.xa_index);
-		if (PageHWPoison(page))
-			goto unlock;
-
-		if (mmap_miss > 0)
-			mmap_miss--;
+		unsigned long end;
 
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
+		end = folio->index + folio_nr_pages(folio) - 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
 
-		/*
-		 * NOTE: If there're PTE markers, we'll leave them to be
-		 * handled in the specific fault path, and it'll prohibit the
-		 * fault-around logic.
-		 */
-		if (!pte_none(*vmf->pte))
-			goto unlock;
+		ret |= filemap_map_folio_range(vmf, folio,
+				xas.xa_index - folio->index, addr, nr_pages);
+		xas.xa_index += nr_pages;
 
-		/* We're about to handle the fault */
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
-
-		do_set_pte(vmf, page, addr);
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, addr, vmf->pte);
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			folio_ref_inc(folio);
-			goto again;
-		}
-		folio_unlock(folio);
-		continue;
-unlock:
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			goto again;
-		}
 		folio_unlock(folio);
 		folio_put(folio);
 	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);
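The batching arithmetic in the new loop is worth checking by hand: end is
the last page index covered by the folio, so min(end, end_pgoff) -
xas.xa_index + 1 pages are handed to a single filemap_map_folio_range()
call. A stand-alone model of just that calculation (plain C, no kernel
types; names mirror the kernel variables for readability):

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* How many pages does one filemap_map_folio_range() call cover, given a
 * folio spanning [folio_index, folio_index + folio_nr) in the file and a
 * fault-around window ending at end_pgoff, entered at xa_index? */
static unsigned long map_count(unsigned long xa_index,
			       unsigned long folio_index,
			       unsigned long folio_nr,
			       unsigned long end_pgoff)
{
	unsigned long end = folio_index + folio_nr - 1;	/* last folio page */

	return min_ul(end, end_pgoff) - xa_index + 1;
}

int main(void)
{
	/* 16-page folio at index 32, window ends at 40, entered at 34:
	 * pages 34..40 are mapped in one call, so 7 */
	printf("%lu\n", map_count(34, 32, 16, 40));
	/* window extends past the folio: pages 34..47 are mapped, so 14 */
	printf("%lu\n", map_count(34, 32, 16, 100));
	return 0;
}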
From patchwork Tue Feb 28 21:37:35 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62641

From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 32/34] rmap: add folio_add_file_rmap_range()
Date: Tue, 28 Feb 2023 21:37:35 +0000
Message-Id: <20230228213738.272178-33-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

folio_add_file_rmap_range() allows adding a pte mapping to a specific
range of a file folio. Compared to page_add_file_rmap(), it batches the
__lruvec_stat update for large folios.
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 60 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..a3825ce81102 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);

diff --git a/mm/rmap.c b/mm/rmap.c
index bacdb795d5ee..fffdb85a3b3d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1303,31 +1303,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * folio_add_file_rmap_range - add pte mapping to page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
 * @vma:	the vm area in which the mapping is added
 * @compound:	charge the page as compound or small page
 *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
+ *
 * The caller needs to hold the pte lock.
 */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void folio_add_file_rmap_range(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma,
+		bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr_pmdmapped = 0, first;
+	int nr = 0;
 
-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
@@ -1356,6 +1364,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from
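A compact userspace model of the accounting loop in
folio_add_file_rmap_range() above -- an illustration, not kernel code.
_mapcount starts at -1, so the increment that brings it to 0 detects the
first mapping of a page; for a large folio, the folio-wide
_nr_pages_mapped counter stops contributing once it reaches
COMPOUND_MAPPED (512 is an assumed value here, for a 2MB folio of 4KB
pages). The function returns the single NR_FILE_MAPPED delta for the
whole range.

/* Illustrative userspace model only -- not kernel code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define COMPOUND_MAPPED 512		/* assumed value for illustration */

struct page { atomic_int _mapcount; };	/* starts at -1, like the kernel's */

static atomic_int nr_pages_mapped;	/* models folio->_nr_pages_mapped */

static int add_range(struct page *page, unsigned int nr_pages, bool large)
{
	int nr = 0, first;

	do {
		/* true when _mapcount goes -1 -> 0: first PTE map of this page */
		first = (atomic_fetch_add(&page->_mapcount, 1) == -1);
		if (first && large) {
			/* new folio-wide mapped count, post-increment */
			first = atomic_fetch_add(&nr_pages_mapped, 1) + 1;
			/* only count while the folio is not PMD-mapped */
			first = (first < COMPOUND_MAPPED);
		}
		if (first)
			nr++;		/* pages newly visible as file-mapped */
	} while (page++, --nr_pages > 0);

	return nr;	/* one NR_FILE_MAPPED adjustment for the whole range */
}

int main(void)
{
	struct page pages[8];

	for (int i = 0; i < 8; i++)
		atomic_init(&pages[i]._mapcount, -1);

	printf("first map:  nr = %d\n", add_range(pages, 8, true));	/* 8 */
	printf("second map: nr = %d\n", add_range(pages, 8, true));	/* 0 */
	return 0;
}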
From patchwork Tue Feb 28 21:37:36 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62638
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 33/34] mm: Convert do_set_pte() to set_pte_range()
Date: Tue, 28 Feb 2023 21:37:36 +0000
Message-Id: <20230228213738.272178-34-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

set_pte_range() allows setting up page table entries for a specific
range. It takes advantage of batched rmap updates for large folios and
now takes care of calling update_mmu_cache_range() itself.
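set_pte_range() builds on set_ptes(), the ranged primitive introduced
earlier in this series, which writes nr consecutive page table entries
from one template entry, advancing the page frame number each time. A
minimal userspace model of that pattern, assuming a toy PTE layout (a
uint64_t with the PFN at an assumed bit position -- not any
architecture's real pte_t):

/* Illustrative userspace model only -- not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define PTE_PFN_SHIFT	12	/* assumed: PFN stored from bit 12 up */

typedef uint64_t pte_t;

/* Write nr consecutive entries, bumping the PFN by one each time. */
static void set_ptes(pte_t *ptep, pte_t entry, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++) {
		ptep[i] = entry;
		entry += (pte_t)1 << PTE_PFN_SHIFT;	/* next page frame */
	}
}

int main(void)
{
	pte_t table[4] = { 0 };
	/* pfn 0x1000 with an assumed "present" bit in bit 0 */
	pte_t first = ((pte_t)0x1000 << PTE_PFN_SHIFT) | 0x1;

	set_ptes(table, first, 4);	/* one call covers four pages */
	for (int i = 0; i < 4; i++)
		printf("pte[%d] = %#llx\n", i, (unsigned long long)table[i]);
	return 0;
}

Because the entries are written in one call, the MMU cache hook can also
be called once for the whole range, which is exactly what the
update_mmu_cache_range() call at the end of set_pte_range() does.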
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 27 +++++++++++++++------------
 4 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block. If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f79667824eb..568ebe7058d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1168,7 +1168,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index db86e459dde6..07ebd90967a3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3507,8 +3507,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index 69e844d5f75c..efd17ff09315 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4255,7 +4255,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = pte_marker_uffd_wp(vmf->orig_pte);
@@ -4263,7 +4264,7 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	bool prefault = vmf->address != addr;
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4277,14 +4278,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4357,11 +4362,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
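The conversion pattern is worth spelling out: do_set_pte() call sites
become set_pte_range() calls with nr == 1, so behaviour is unchanged
until a caller starts passing larger ranges. A minimal sketch of that
equivalence in userspace C (illustrative names only, not the kernel API):

/* Illustrative userspace model only -- not kernel code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

/* Ranged primitive; the single-page case is just nr == 1. */
static void set_pte_range(pte_t *ptep, pte_t entry, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++)
		ptep[i] = entry + i;	/* consecutive page frames */
}

int main(void)
{
	pte_t a[4] = { 0 }, b[4] = { 0 };

	/* Old style: one call per page. */
	for (int i = 0; i < 4; i++)
		set_pte_range(&a[i], 100 + i, 1);

	/* New style: one call for the whole range. */
	set_pte_range(b, 100, 4);

	for (int i = 0; i < 4; i++)
		assert(a[i] == b[i]);
	puts("nr == 1 loop and ranged call produce identical tables");
	return 0;
}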
From patchwork Tue Feb 28 21:37:37 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62644
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 34/34] filemap: Batch PTE mappings
Date: Tue, 28 Feb 2023 21:37:37 +0000
Message-Id: <20230228213738.272178-35-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page. This batches the updates to mm counters and the rmap.

With a will-it-scale.page_fault3-like app (file write fault testing
changed to read fault testing; a pull request to upstream it to
will-it-scale is pending at [1]), this gives a 15% performance gain on a
48C/96T Cascade Lake test box with 96 processes running against xfs.
Perf data collected before/after the change:

Before:

  18.73%--page_add_file_rmap
          |
          --11.60%--__mod_lruvec_page_state
                    |
                    |--7.40%--__mod_memcg_lruvec_state
                    |          |
                    |          --5.58%--cgroup_rstat_updated
                    |
                    --2.53%--__mod_lruvec_state
                              |
                              --1.48%--__mod_node_page_state

After:

   9.93%--page_add_file_rmap_range
          |
          --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |          |
                    |          --1.57%--cgroup_rstat_updated
                    |
                    --0.61%--__mod_lruvec_state
                              |
                              --0.54%--__mod_node_page_state

The time spent in __mod_lruvec_page_state() is reduced by about 9
percentage points.

[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 07ebd90967a3..40be33b5ee46 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3486,11 +3486,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3500,20 +3501,33 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
 		if (vmf->address == addr)
 			ret = VM_FAULT_NOPAGE;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
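To make the control flow of the new loop concrete, here is a userspace
model -- illustrative only. flush_run() stands in for the
set_pte_range() + folio_ref_add() pair, and a zero slot models a page
skipped for HWPoison or an already-populated PTE. Each contiguous run is
flushed exactly once, either when a hole is hit or after the loop ends.

/* Illustrative userspace model only -- not kernel code. */
#include <stdio.h>

static void flush_run(int start, int count)
{
	if (count)	/* models set_pte_range() + folio_ref_add() */
		printf("set_pte_range(page=%d, nr=%d)\n", start, count);
}

static void map_folio_range(const int *mappable, int nr_pages)
{
	int start = 0, count = 0;

	for (int i = 0; i < nr_pages; i++) {
		if (mappable[i]) {	/* extend the current run */
			if (!count)
				start = i;
			count++;
			continue;
		}
		flush_run(start, count);	/* hole: emit pending run */
		count = 0;
	}
	flush_run(start, count);	/* tail run, if any */
}

int main(void)
{
	/* pages 0-2 and 4-6 mappable; page 3 skipped (e.g. HWPoison) */
	const int mappable[] = { 1, 1, 1, 0, 1, 1, 1 };

	map_folio_range(mappable, 7);	/* emits nr=3 twice, not nr=1 six times */
	return 0;
}

For a fully mappable large folio this degenerates to a single
set_pte_range() call, which is why the rmap and counter batching from the
earlier patches pays off here.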