From patchwork Mon Feb 27 17:57:12 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62075
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/30] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Mon, 27 Feb 2023 17:57:12 +0000
Message-Id: <20230227175741.71216-2-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
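To illustrate the new calling convention, a hedged sketch of a caller follows
(example_set_ptes() is hypothetical and not part of this series; how the PFN
advances between PTEs is architecture-specific):

	/* Hypothetical caller: one check call now covers all nr PTEs/PFNs. */
	static void example_set_ptes(struct mm_struct *mm, unsigned long addr,
				     pte_t *ptep, pte_t pte, unsigned int nr)
	{
		unsigned int i;

		/* Validate the whole range up front... */
		page_table_check_ptes_set(mm, addr, ptep, pte, nr);

		/* ...rather than calling the checker once per PTE. */
		for (i = 0; i < nr; i++) {
			set_pte(ptep + i, pte);
			/* advance to the next PFN (x86-style PTE encoding assumed) */
			pte = __pte(pte_val(pte) + PAGE_SIZE);
		}
	}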
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b6ba466e2e8a..69765dc697af 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -358,7 +358,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	return __set_pte_at(mm, addr, ptep, pte);
 }
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ab05f892d317..b516f3b59616 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -459,7 +459,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
 	__set_pte_at(mm, addr, ptep, pteval);
 }
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7425f32e5293..84be3e07b112 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1022,7 +1022,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	set_pte(ptep, pte);
 }
 
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 	__page_table_check_pud_clear(mm, addr, pud);
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }
 
 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }
 
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0042..e6f4d40caaa2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -184,20 +184,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
 
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pte_clear(mm, addr, *ptep);
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep[i]);
 	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
 				     pte_write(pte));
 	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);
 
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)

From patchwork Mon Feb 27 17:57:13 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62079
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/30] mm: Add generic flush_icache_pages() and documentation
Date: Mon, 27 Feb 2023 17:57:13 +0000
Message-Id: <20230227175741.71216-3-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().
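The generic flush_icache_pages() added below is a no-op, matching the no-op
generic flush_icache_page().  As a minimal sketch (assuming an architecture
whose flush_icache_page() does real work), a range version can be built by
looping over the single-page primitive:

	/* Sketch only: a range flush built from the per-page primitive. */
	static inline void flush_icache_pages(struct vm_area_struct *vma,
					      struct page *page, unsigned int nr)
	{
		for (; nr--; page++)
			flush_icache_page(vma, page);
	}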
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
 include/asm-generic/cacheflush.h    |  5 +++++
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..d4c9e2a28d36 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,13 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
+   unsigned long address, pte_t *ptep, unsigned int nr)``
 
-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +306,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it
 	will clear this bit when such a page first enters the pagecache.
 
-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.
 
 .. important::
 
@@ -369,7 +370,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In the future, the hope
+	flush_dcache_page and update_mmu_cache_range. In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {

From patchwork Mon Feb 27 17:57:14 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62078
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/30] mm: Add folio_flush_mapping()
Date: Mon, 27 Feb 2023 17:57:14 +0000
Message-Id: <20230227175741.71216-4-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.
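A hedged sketch of the intended use in architecture flush code follows
(example_flush_dcache_folio() is hypothetical; the real per-arch logic varies):

	static void example_flush_dcache_folio(struct folio *folio)
	{
		struct address_space *mapping = folio_flush_mapping(folio);

		/* anonymous folios have no file mapping to alias */
		if (mapping && mapping_mapped(mapping)) {
			/* arch-specific: write back and invalidate user aliases */
		} else {
			/* defer: just mark the kernel mapping as dirty */
		}
	}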
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 51b75b89730e..647c5a036a97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -369,6 +369,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
 	return folio->mapping;
 }
 
+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return swapcache_mapping(folio);
+
+	return folio->mapping;
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
 	return folio_file_mapping(page_folio(page));
@@ -379,11 +399,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }
 
 /**

From patchwork Mon Feb 27 17:57:15 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62053
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/30] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Mon, 27 Feb 2023 17:57:15 +0000
Message-Id: <20230227175741.71216-5-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.
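The convention, shown in isolation (a sketch of the idiom rather than a quote
of any one header): the architecture defines the function's own name, and
generic code tests that name instead of a separate ARCH_IMPLEMENTS_* symbol:

	/* In an architecture header: */
	void flush_dcache_folio(struct folio *folio);
	#define flush_dcache_folio flush_dcache_folio

	/* In generic code, the same name doubles as the feature test: */
	#ifndef flush_dcache_folio
	void flush_dcache_folio(struct folio *folio)
	{
		/* generic fallback */
	}
	#endif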
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/cacheflush.h | 4 ++--
 mm/util.c                  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@ struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c
index b8ed9dbc7fd5..f66e0ca82d2d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1124,7 +1124,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);

From patchwork Mon Feb 27 17:57:16 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62072
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org,
 Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v2 05/30] alpha: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:16 +0000
Message-Id: <20230227175741.71216-6-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_icache_pages().
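As a usage sketch (hypothetical caller, not part of the patch): one set_ptes()
call installs several consecutive PTEs.  On alpha the PFN sits above bit 32 of
the PTE, which is why the implementation below advances the PTE value by
1UL << 32 per page:

	static void example_map_four(struct mm_struct *mm, unsigned long addr,
				     pte_t *ptep, struct page *page, pgprot_t prot)
	{
		/* equivalent to four set_pte_at() calls with consecutive PFNs */
		set_ptes(mm, addr, ptep, mk_pte(page, prot), 4);
	}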
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    | 18 +++++++++++++++++-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include <asm-generic/cacheflush.h>
 
 #endif /* _ALPHA_CACHEFLUSH_H */
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..1e3354e9731b 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,18 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1UL << 32;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -303,6 +314,11 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }
 
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs.  Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().
From patchwork Mon Feb 27 17:57:17 2023
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 62076
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org,
 Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v2 06/30] arc: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:17 +0000
Message-Id: <20230227175741.71216-7-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which
means it cannot always be set as we don't know that all pages in this
folio were cleaned).  Enhance the internal flush routines to take the
number of pages to flush.
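A simplified sketch of the per-folio tracking this moves to (not code from the
patch, though the names are taken from it): the flag lives in folio->flags, so
one test-and-set covers every page, and a flush must then cover
folio_nr_pages() pages:

	static void example_sync_folio(struct folio *folio, phys_addr_t paddr,
				       unsigned long vaddr)
	{
		if (!test_and_set_bit(PG_dc_clean, &folio->flags))
			__flush_dcache_pages(paddr, vaddr, folio_nr_pages(folio));
	}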
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
---
 arch/arc/include/asm/cacheflush.h         |  7 +-
 arch/arc/include/asm/pgtable-bits-arcv2.h | 20 +++--
 arch/arc/mm/cache.c                       | 61 ++++++++------
 arch/arc/mm/tlb.c                         | 18 +++--
 arch/arm/include/asm/cacheflush.h         | 24 +++---
 arch/arm/include/asm/pgtable.h            |  5 +-
 arch/arm/include/asm/tlbflush.h           | 13 +--
 arch/arm/mm/copypage-v4mc.c               |  5 +-
 arch/arm/mm/copypage-v6.c                 |  5 +-
 arch/arm/mm/copypage-xscale.c             |  5 +-
 arch/arm/mm/dma-mapping.c                 | 24 +++---
 arch/arm/mm/fault-armv.c                  | 14 ++--
 arch/arm/mm/flush.c                       | 99 ++++++++++++++---------
 arch/arm/mm/mm.h                          |  2 +-
 arch/arm/mm/mmu.c                         | 14 +++-
 15 files changed, 193 insertions(+), 123 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index e201b4b1655a..04f65f588510 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -25,17 +25,20 @@
  * in update_mmu_cache()
  */
 #define flush_icache_page(vma, page)
+#define flush_icache_pages(vma, page, nr)
 
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);
 void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len);
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr);
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr);
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 
 void dma_cache_wback_inv(phys_addr_t start, unsigned long sz);
 void dma_cache_inv(phys_addr_t start, unsigned long sz);
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 6e9f8ca6d6a1..4a1b2ce204c6 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -100,14 +100,24 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		      pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 55c6de138eae..3c16ee942a5c 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -752,17 +752,17 @@ static inline void arc_slc_enable(void)
  * There's a corollary case, where kernel READs from a userspace mapped page.
  * If the U-mapping is not congruent to K-mapping, former needs flushing.
 */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
 	if (!cache_is_vipt_aliasing()) {
-		clear_bit(PG_dc_clean, &page->flags);
+		clear_bit(PG_dc_clean, &folio->flags);
 		return;
 	}
 
 	/* don't handle anon pages here */
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 	if (!mapping)
 		return;
 
@@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page)
 	 * Make a note that K-mapping is dirty
 	 */
 	if (!mapping_mapped(mapping)) {
-		clear_bit(PG_dc_clean, &page->flags);
-	} else if (page_mapcount(page)) {
-
+		clear_bit(PG_dc_clean, &folio->flags);
+	} else if (folio_mapped(folio)) {
 		/* kernel reading from page with U-mapping */
-		phys_addr_t paddr = (unsigned long)page_address(page);
-		unsigned long vaddr = page->index << PAGE_SHIFT;
+		phys_addr_t paddr = (unsigned long)folio_address(folio);
+		unsigned long vaddr = folio_pos(folio);
 
+		/*
+		 * vaddr is not actually the virtual address, but is
+		 * congruent to every user mapping.
+		 */
 		if (addr_not_cache_congruent(paddr, vaddr))
-			__flush_dcache_page(paddr, vaddr);
+			__flush_dcache_pages(paddr, vaddr,
+						folio_nr_pages(folio));
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	return flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
 /*
@@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len)
 }
 
 /* wrapper to compile time eliminate alignment checks in flush loop */
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr)
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
+	__ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE);
 }
 
 /*
  * wrapper to clearout kernel or userspace mappings of a page
  * For kernel mappings @vaddr == @paddr
  */
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr)
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV);
+	__dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV);
 }
 
 noinline void flush_cache_all(void)
@@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr,
 
 	u_vaddr &= PAGE_MASK;
 
-	__flush_dcache_page(paddr, u_vaddr);
+	__flush_dcache_pages(paddr, u_vaddr, 1);
 
 	if (vma->vm_flags & VM_EXEC)
-		__inv_icache_page(paddr, u_vaddr);
+		__inv_icache_pages(paddr, u_vaddr, 1);
 }
 
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
@@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 			unsigned long u_vaddr)
 {
 	/* TBD: do we really need to clear the kernel mapping */
-	__flush_dcache_page((phys_addr_t)page_address(page), u_vaddr);
-	__flush_dcache_page((phys_addr_t)page_address(page),
-			    (phys_addr_t)page_address(page));
+	__flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1);
+	__flush_dcache_pages((phys_addr_t)page_address(page),
+			    (phys_addr_t)page_address(page), 1);
 
 }
 
@@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long u_vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
+	struct folio *dst = page_folio(to);
 	void *kfrom = kmap_atomic(from);
 	void *kto = kmap_atomic(to);
 	int clean_src_k_mappings = 0;
@@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from,
	 *	addr_not_cache_congruent() is 0
 	 */
 	if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) {
-		__flush_dcache_page((unsigned long)kfrom, u_vaddr);
+		__flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1);
 		clean_src_k_mappings = 1;
 	}
 
@@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from,
 	 * non copied user pages (e.g. read faults which wire in pagecache page
 	 * directly).
 	 */
-	clear_bit(PG_dc_clean, &to->flags);
+	clear_bit(PG_dc_clean, &dst->flags);
 
 	/*
 	 * if SRC was already usermapped and non-congruent to kernel mapping
 	 * sync the kernel mapping back to physical page
 	 */
 	if (clean_src_k_mappings) {
-		__flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom);
-		set_bit(PG_dc_clean, &from->flags);
+		__flush_dcache_pages((unsigned long)kfrom,
+					(unsigned long)kfrom, 1);
 	} else {
-		clear_bit(PG_dc_clean, &from->flags);
+		clear_bit(PG_dc_clean, &src->flags);
 	}
 
 	kunmap_atomic(kto);
@@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from,
 
 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	clear_page(to);
-	clear_bit(PG_dc_clean, &page->flags);
+	clear_bit(PG_dc_clean, &folio->flags);
 }
 EXPORT_SYMBOL(clear_user_page);
 
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 5f71445f26bd..0a996b65bb4e 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
  * Note that flush (when done) involves both WBACK - so physical page is
  * in sync as well as INV - so any non-congruent aliases don't remain
  */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr)
 {
 	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
 	phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
@@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 	 */
 	if ((vma->vm_flags & VM_EXEC) ||
 	     addr_not_cache_congruent(paddr, vaddr)) {
-
-		int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
+		struct folio *folio = page_folio(page);
+		int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
 		if (dirty) {
+			unsigned long offset = offset_in_folio(folio, paddr);
+			nr = folio_nr_pages(folio);
+			paddr -= offset;
+			vaddr -= offset;
 			/* wback + inv dcache lines (K-mapping) */
-			__flush_dcache_page(paddr, paddr);
+			__flush_dcache_pages(paddr, paddr, nr);
 
 			/* invalidate any existing icache lines (U-mapping) */
 			if (vma->vm_flags & VM_EXEC)
-				__inv_icache_page(paddr, vaddr);
+				__inv_icache_pages(paddr, vaddr, nr);
 		}
 	}
 }
@@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 			  pmd_t *pmd)
 {
 	pte_t pte = __pte(pmd_val(*pmd));
-	update_mmu_cache(vma, addr, &pte);
+	update_mmu_cache_range(vma, addr, &pte, HPAGE_PMD_NR);
 }
 
 void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index a094f964c869..841e268d2374 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 				vma->vm_flags);
 }
 
-static inline void
-vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
+static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
+		unsigned long user_addr, unsigned long pfn, unsigned int nr)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
 	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
 		unsigned long addr = user_addr & PAGE_MASK;
-		__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
+		__cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE,
+				vma->vm_flags);
 	}
 }
 
@@ -247,15 +248,17 @@
 		vivt_flush_cache_mm(mm)
 #define flush_cache_range(vma,start,end) \
 		vivt_flush_cache_range(vma,start,end)
-#define flush_cache_page(vma,addr,pfn) \
-		vivt_flush_cache_page(vma,addr,pfn)
+#define flush_cache_pages(vma, addr, pfn, nr) \
+		vivt_flush_cache_pages(vma, addr, pfn, nr)
 #else
-extern void flush_cache_mm(struct mm_struct *mm);
-extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn);
+void flush_cache_mm(struct mm_struct *mm);
+void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr,
+		unsigned long pfn, unsigned int nr);
 #endif
 
 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1)
 
 /*
  * flush_icache_user_range is used when we want to ensure that the
@@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
  * See update_mmu_cache for the user space part.
  */
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-extern void flush_dcache_page(struct page *);
+void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 
 #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
 static inline void flush_kernel_vmap_range(void *addr, int size)
@@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
  * duplicate cache flushing elsewhere performed by flush_dcache_page().
  */
 #define flush_icache_page(vma,page)	do { } while (0)
+#define flush_icache_pages(vma, page, nr)	do { } while (0)
 
 /*
  * flush_cache_vmap() is used when creating mappings (eg, via vmap,
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index a58ccbb406ad..6525ac82bd50 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval)
 extern void __sync_icache_dcache(pte_t pteval);
 #endif
 
-void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pteval);
+void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pteval, unsigned int nr);
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
 {
diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index 0ccc985b90af..7d792e485f4f 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -619,18 +619,21 @@ extern void flush_bp_all(void);
  * If PG_dcache_clean is not set for the page, we need to ensure that any
  * cache entries for the kernels virtual memory range are written
  * back to the page. On ARMv6 and later, the cache coherency is handled via
- * the set_pte_at() function.
+ * the set_ptes() function.
*/ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..5ecfde41d70a 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -693,6 +693,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -707,19 +708,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. 
*/ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index 0e49154454a6..e2c869b8f012 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -178,8 +178,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; @@ -192,13 +192,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 7ff9feea13a6..07ea0ab51099 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernel's mapping.
*/ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. */ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + unsigned long start, offset, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. 
*/ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where * the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. 
The expected sequence is: diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index d7ffccb7fea7..419316316711 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -45,7 +45,7 @@ struct mem_type { const struct mem_type *get_mem_type(unsigned int type); -extern void __flush_dcache_page(struct address_space *mapping, struct page *page); +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio); /* * ARM specific vm_struct->flags bits. diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 463fc2a8448f..9947bbc32b04 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc) bootmem_init(); empty_zero_page = virt_to_page(zero_page); - __flush_dcache_page(NULL, empty_zero_page); + __flush_dcache_folio(NULL, page_folio(empty_zero_page)); } void __init early_mm_init(const struct machine_desc *mdesc) @@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc) early_paging_init(mdesc); } -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { unsigned long ext = 0; @@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, ext |= PTE_EXT_NG; } - set_pte_ext(ptep, pteval, ext); + for (;;) { + set_pte_ext(ptep, pteval, ext); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += PAGE_SIZE; + } }
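The loop above is the pattern the whole series uses: write the first PTE, then step the PTE pointer and the PFN encoded in the PTE value until nr entries are written. A standalone sketch of the arithmetic, assuming a 4KiB page and a toy pte_t that holds a raw physical address (neither is the kernel's real encoding):

	#include <stdio.h>

	#define PAGE_SIZE 4096UL
	typedef unsigned long pte_t;	/* toy PTE: just a physical address */

	/* Fill nr consecutive PTEs with nr consecutive physical pages. */
	static void set_ptes_sketch(pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			*ptep = pte;
			if (--nr == 0)
				break;
			ptep++;
			pte += PAGE_SIZE;
		}
	}

	int main(void)
	{
		pte_t table[4];

		set_ptes_sketch(table, 0x100000UL, 4);
		for (int i = 0; i < 4; i++)
			printf("pte[%d] = %#lx\n", i, table[i]);
		return 0;
	}

Because the first entry is written before --nr is tested, callers must never pass nr == 0; the real set_ptes() implementations in this series share that precondition.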
From patchwork Mon Feb 27 17:57:18 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62069
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Catalin Marinas , linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 07/30] arm64: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:18 +0000
Message-Id: <20230227175741.71216-8-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio.
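Per-folio here means the bit lives in the head page's flags word, so one test or clear covers every page of the folio. A sketch of the idea (not code from this patch; PG_dcache_clean is the arch's alias for PG_arch_1, and the caller is hypothetical):

	/*
	 * Hypothetical caller: the kernel has written to some page of a
	 * (possibly large) folio, so drop the "dcache clean" hint for the
	 * whole folio with a single bit operation on the head page.
	 */
	static void mark_folio_dcache_dirty(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* head page */

		clear_bit(PG_dcache_clean, &folio->flags);
	}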
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/cacheflush.h | 4 +++- arch/arm64/include/asm/pgtable.h | 25 ++++++++++++++------ arch/arm64/mm/flush.c | 36 +++++++++++------------------ 3 files changed, 35 insertions(+), 30 deletions(-) diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 37185e978aeb..d115451ed263 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, #define copy_to_user_page copy_to_user_page /* - * flush_dcache_page is used when the kernel has written to the page + * flush_dcache_folio is used when the kernel has written to the page * cache page at virtual address page->virtual. * * If this page isn't mapped (ie, page_mapping == NULL), or it might @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio static __always_inline void icache_inval_all_pou(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 69765dc697af..4d1b79dbff16 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - return __set_pte_at(mm, addr, ptep, pte); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pte); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * Huge pte definitions. @@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { /* * We don't do anything here, so there's a very small chance of @@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #ifdef CONFIG_ARM64_PA_BITS_52 diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index 5f9379b3c8c8..deb781af0a3a 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, void __sync_icache_dcache(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { - sync_icache_aliases((unsigned long)page_address(page), - (unsigned long)page_address(page) + - page_size(page)); - set_bit(PG_dcache_clean, &page->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + sync_icache_aliases((unsigned long)folio_address(folio), + (unsigned long)folio_address(folio) + + folio_size(folio)); + set_bit(PG_dcache_clean, &folio->flags); } } EXPORT_SYMBOL_GPL(__sync_icache_dcache); @@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache); * it as dirty for later flushing when mapped in user space (if executable, * see __sync_icache_dcache). */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in __sync_icache_dcache()). - */ - if (PageHuge(page)) - page = compound_head(page); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +EXPORT_SYMBOL(flush_dcache_folio); - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } EXPORT_SYMBOL(flush_dcache_page);
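The two functions above split the lazy-flush protocol: flush_dcache_folio() only drops the clean hint, and __sync_icache_dcache() pays for the actual maintenance the first time a dirty folio is mapped executable. Condensed into one helper as a sketch (a paraphrase of the diff, not additional kernel code; sync_icache_aliases() is the arm64-internal helper shown above):

	static void sync_if_dirty(struct folio *folio)
	{
		unsigned long start = (unsigned long)folio_address(folio);

		/* first executable mapping of a dirty folio syncs I$/D$ once */
		if (!test_bit(PG_dcache_clean, &folio->flags)) {
			sync_icache_aliases(start, start + folio_size(folio));
			set_bit(PG_dcache_clean, &folio->flags);
		}
	}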
From patchwork Mon Feb 27 17:57:19 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62063
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Guo Ren , linux-csky@vger.kernel.org
Subject: [PATCH v2 08/30] csky: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:19 +0000
Message-Id: <20230227175741.71216-9-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio.
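On csky ABI v1 the dcache operation is a whole-cache writeback-invalidate, so tracking cleanliness per folio means one dcache_wbinv_all() no matter how many of the folio's pages fault in. A sketch of why (hypothetical fault-path fragment, not from the patch):

	/* Called once per faulting page; flushes once per dirty folio. */
	static void example_fault_sync(struct folio *folio)
	{
		if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
			dcache_wbinv_all();	/* csky: writeback + invalidate all */
	}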
Signed-off-by: Matthew Wilcox (Oracle) Cc: Guo Ren Cc: linux-csky@vger.kernel.org Acked-by: Guo Ren --- arch/csky/abiv1/cacheflush.c | 32 +++++++++++++++++----------- arch/csky/abiv1/inc/abi/cacheflush.h | 2 ++ arch/csky/abiv2/cacheflush.c | 30 +++++++++++++------------- arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++-- arch/csky/include/asm/pgtable.h | 21 +++++++++++++++--- 5 files changed, 62 insertions(+), 33 deletions(-) diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c index fb91b069dc69..ba43f6c26b4f 100644 --- a/arch/csky/abiv1/cacheflush.c +++ b/arch/csky/abiv1/cacheflush.c @@ -14,43 +14,49 @@ #define PG_dcache_clean PG_arch_1 -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); - if (mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + if (mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { dcache_wbinv_all(); if (mapping) icache_inv_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); - struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) + folio = page_folio(pfn_to_page(pfn)); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) dcache_wbinv_all(); - if (page_mapping_file(page)) { + if (folio_flush_mapping(folio)) { if (vma->vm_flags & VM_EXEC) icache_inv_all(); } } diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h index ed62e2066ba7..0d6cb65624c4 100644 --- a/arch/csky/abiv1/inc/abi/cacheflush.h +++ b/arch/csky/abiv1/inc/abi/cacheflush.h @@ -9,6 +9,8 @@ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio #define flush_cache_mm(mm) dcache_wbinv_all() #define flush_cache_page(vma, page, pfn) cache_wbinv_all() diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c index 39c51399dd81..c1cf0d55a2a1 100644 --- a/arch/csky/abiv2/cacheflush.c +++ b/arch/csky/abiv2/cacheflush.c @@ -6,30 +6,30 @@ #include #include -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *pte) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *pte, unsigned int nr) { - unsigned long addr; + unsigned long pfn = pte_pfn(*pte); - struct page *page; + struct folio *folio; + unsigned int i; - if (!pfn_valid(pte_pfn(*pte))) + if (!pfn_valid(pfn) || is_zero_pfn(pfn)) return; - page = pfn_to_page(pte_pfn(*pte)); - if (page == ZERO_PAGE(0)) - return; + folio = page_folio(pfn_to_page(pfn)); - if (test_and_set_bit(PG_dcache_clean, &page->flags)) + if (test_and_set_bit(PG_dcache_clean, &folio->flags)) return; - addr = (unsigned long) kmap_atomic(page); - - dcache_wb_range(addr, addr + PAGE_SIZE); + for (i = 0; i <
folio_nr_pages(folio); i++) { + unsigned long addr = (unsigned long) kmap_local_folio(folio, + i * PAGE_SIZE); - if (vma->vm_flags & VM_EXEC) - icache_inv_range(addr, addr + PAGE_SIZE); - - kunmap_atomic((void *) addr); + dcache_wb_range(addr, addr + PAGE_SIZE); + if (vma->vm_flags & VM_EXEC) + icache_inv_range(addr, addr + PAGE_SIZE); + kunmap_local((void *) addr); + } } void flush_icache_deferred(struct mm_struct *mm) diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h index a565e00c3f70..9c728933a776 100644 --- a/arch/csky/abiv2/inc/abi/cacheflush.h +++ b/arch/csky/abiv2/inc/abi/cacheflush.h @@ -18,11 +18,17 @@ #define PG_dcache_clean PG_arch_1 +static inline void flush_dcache_folio(struct folio *folio) +{ + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 static inline void flush_dcache_page(struct page *page) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + flush_dcache_folio(page_folio(page)); } #define flush_dcache_mmap_lock(mapping) do { } while (0) diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h index d4042495febc..a30ae048233e 100644 --- a/arch/csky/include/asm/pgtable.h +++ b/arch/csky/include/asm/pgtable.h @@ -90,7 +90,20 @@ static inline void set_pte(pte_t *p, pte_t pte) /* prevent out of order excution */ smp_mb(); } -#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval) + +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline pte_t *pmd_page_vaddr(pmd_t pmd) { @@ -263,8 +276,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *pte); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *pte, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \ remap_pfn_range(vma, vaddr, pfn, size, prot)
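The abiv2 loop above relies on kmap_local_folio() taking a byte offset into the folio and returning a kernel address for the page containing that offset. Isolated as a sketch (same helpers as the diff, not a second copy of the patch):

	/* Write back the D-cache for each page of a folio in turn. */
	static void wb_folio_by_page(struct folio *folio)
	{
		unsigned int i;

		for (i = 0; i < folio_nr_pages(folio); i++) {
			unsigned long addr;

			/* map page i of the folio: offset is in bytes */
			addr = (unsigned long)kmap_local_folio(folio, i * PAGE_SIZE);
			dcache_wb_range(addr, addr + PAGE_SIZE);
			kunmap_local((void *)addr);
		}
	}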
From patchwork Mon Feb 27 17:57:20 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62070
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Brian Cain
Subject: [PATCH v2 09/30] hexagon: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:20 +0000
Message-Id: <20230227175741.71216-10-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>
Add set_ptes() and update_mmu_cache_range(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Brian Cain --- arch/hexagon/include/asm/cacheflush.h | 7 +++++-- arch/hexagon/include/asm/pgtable.h | 16 ++++++++++++++-- 2 files changed, 19 insertions(+), 4 deletions(-) diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h index 6eff0730e6ef..63ca314ede89 100644 --- a/arch/hexagon/include/asm/cacheflush.h +++ b/arch/hexagon/include/asm/cacheflush.h @@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void); * clean the cache when the PTE is set. * */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { /* generic_ptrace_pokedata doesn't wind up here, does it? */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, void *src, int len); #define copy_to_user_page copy_to_user_page diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h index 59393613d086..f58f1d920769 100644 --- a/arch/hexagon/include/asm/pgtable.h +++ b/arch/hexagon/include/asm/pgtable.h @@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte) #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval)) /* - * set_pte_at - update page table and do whatever magic may be + * set_ptes - update page table and do whatever magic may be * necessary to make the underlying hardware/firmware take note. * * VM may require a virtual instruction to alert the MMU.
*/ -#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline unsigned long pmd_page_vaddr(pmd_t pmd) {
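Hexagon shows the series' compatibility pattern in its simplest form: the range calls do all the work, and the old single-entry names become one-line wrappers, so existing callers compile unchanged. The shims, collected from the diffs above:

	/* Old single-page API expressed as the nr == 1 case of the range API. */
	#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
	#define update_mmu_cache(vma, addr, ptep) \
		update_mmu_cache_range(vma, addr, ptep, 1)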
From patchwork Mon Feb 27 17:57:21 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62064
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org
Subject: [PATCH v2 10/30] ia64: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:21 +0000
Message-Id: <20230227175741.71216-11-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to per-folio, which makes arch_dma_mark_clean() and mark_clean() a little more exciting.
Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-ia64@vger.kernel.org --- arch/ia64/hp/common/sba_iommu.c | 26 +++++++++++++++----------- arch/ia64/include/asm/cacheflush.h | 14 ++++++++++---- arch/ia64/include/asm/pgtable.h | 14 +++++++++++++- arch/ia64/mm/init.c | 29 +++++++++++++++++++---------- 4 files changed, 57 insertions(+), 26 deletions(-) diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c index 8ad6946521d8..48d475f10003 100644 --- a/arch/ia64/hp/common/sba_iommu.c +++ b/arch/ia64/hp/common/sba_iommu.c @@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba) #endif #ifdef ENABLE_MARK_CLEAN -/** +/* * Since DMA is i-cache coherent, any (complete) pages that were written via * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to * flush them when they get mapped into an executable vm-area. */ -static void -mark_clean (void *addr, size_t size) +static void mark_clean(void *addr, size_t size) { - unsigned long pg_addr, end; - - pg_addr = PAGE_ALIGN((unsigned long) addr); - end = (unsigned long) addr + size; - while (pg_addr + PAGE_SIZE <= end) { - struct page *page = virt_to_page((void *)pg_addr); - set_bit(PG_arch_1, &page->flags); - pg_addr += PAGE_SIZE; + struct folio *folio = virt_to_folio(addr); + ssize_t left = size; + size_t offset = offset_in_folio(folio, addr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); + } + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_arch_1, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } #endif diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h index 708c0fa5d975..eac493fa9e0d 100644 --- a/arch/ia64/include/asm/cacheflush.h +++ b/arch/ia64/include/asm/cacheflush.h @@ -13,10 +13,16 @@ #include #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) \ -do { \ - clear_bit(PG_arch_1, &(page)->flags); \ -} while (0) +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_arch_1, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_range flush_icache_range diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h index 21c97e31a28a..0c2be4ea664b 100644 --- a/arch/ia64/include/asm/pgtable.h +++ b/arch/ia64/include/asm/pgtable.h @@ -303,7 +303,18 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) *ptep = pteval; } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * Make page protection values cacheable, uncacheable, or write- @@ -396,6 +407,7 @@ pte_same (pte_t a, pte_t b) return pte_val(a) == pte_val(b); } +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) #define update_mmu_cache(vma, address, ptep) do { } while (0) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index 7f5353e28516..12aef25944aa 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -50,30 +50,39 @@ void __ia64_sync_icache_dcache (pte_t pte) { unsigned long
addr; - struct page *page; + struct folio *folio; - page = pte_page(pte); - addr = (unsigned long) page_address(page); + folio = page_folio(pte_page(pte)); + addr = (unsigned long)folio_address(folio); - if (test_bit(PG_arch_1, &page->flags)) + if (test_bit(PG_arch_1, &folio->flags)) return; /* i-cache is already coherent with d-cache */ - flush_icache_range(addr, addr + page_size(page)); - set_bit(PG_arch_1, &page->flags); /* mark page as clean */ + flush_icache_range(addr, addr + folio_size(folio)); + set_bit(PG_arch_1, &folio->flags); /* mark page as clean */ } /* - * Since DMA is i-cache coherent, any (complete) pages that were written via + * Since DMA is i-cache coherent, any (complete) folios that were written via * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to * flush them when they get mapped into an executable vm-area. */ void arch_dma_mark_clean(phys_addr_t paddr, size_t size) { - unsigned long pfn = PHYS_PFN(paddr); + struct folio *folio = page_folio(phys_to_page(paddr)); + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); - do { + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); + } + + while (left >= (ssize_t)folio_size(folio)) { - set_bit(PG_arch_1, &pfn_to_page(pfn)->flags); - } while (++pfn <= PHYS_PFN(paddr + size - 1)); + set_bit(PG_arch_1, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); + } } inline void
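mark_clean() and arch_dma_mark_clean() now share one shape: skip a folio the range only partially covers, then mark every folio that lies wholly inside it. The common walk, extracted as a sketch (helpers as in the diff; left must be signed so a short tail goes negative rather than wrapping to a huge unsigned value):

	static void mark_whole_folios(struct folio *folio, size_t offset,
				      ssize_t left)
	{
		if (offset) {		/* range starts mid-folio: skip it */
			left -= folio_size(folio) - offset;
			folio = folio_next(folio);
		}

		while (left >= (ssize_t)folio_size(folio)) {
			set_bit(PG_arch_1, &folio->flags);
			left -= folio_size(folio);
			folio = folio_next(folio);
		}
	}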
[2620:137:e000::1:20]) by mx.google.com with ESMTP id j14-20020a17090a738e00b00230ca3efcf3si7613153pjg.158.2023.02.27.09.59.33; Mon, 27 Feb 2023 09:59:46 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=b1Oo1BaN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230174AbjB0R6H (ORCPT + 99 others); Mon, 27 Feb 2023 12:58:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47846 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230083AbjB0R5y (ORCPT ); Mon, 27 Feb 2023 12:57:54 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 81FF22448C; Mon, 27 Feb 2023 09:57:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fxYZM+dA5j6swMF6gw/nu6/QS6N0if0H8qzTGtI3Nx4=; b=b1Oo1BaN8LP9ha/JnRRL5DsJ/V EoOvGgwr+jghsnF8bDZmTkXQLjBfuksYjYMyLBXvW/1odjPJlIm3s4dgF9og2c/I7g1bC+56ijwba j7Ta1l/0uPm4WPgpcHC8zMM32aZDUTBtelP6RMHAg4ibMytgguA9LwUh4YsU1mlG7BaHbEm2OcHPy Tl5mglk3oQIHBJxKysycjLijuvERqc9MndGUyA7hGduAhswYpEtS2qbzIYt27c3Q6/Eep2B23yb08 3Gfp5dOlhZxtrrGtnzcENqXnoJYdpS16nFWRBi/hBpg3UPW5TyJkSI+KkSI+Uuz3XlSPWHa9OI7uf lu5QnORw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pWhku-000IXP-G1; Mon, 27 Feb 2023 17:57:44 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Huacai Chen , WANG Xuerui , loongarch@lists.linux.dev Subject: [PATCH v2 11/30] loongarch: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:22 +0000 Message-Id: <20230227175741.71216-12-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1759008035837316682?= X-GMAIL-MSGID: =?utf-8?q?1759008035837316682?= Add set_ptes() and update_mmu_cache_range(). It would probably be more efficient to implement __update_tlb() by flushing the entire folio instead of calling it __update_tlb() N times, but I'll leave that for someone who understands the architecture better. 
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
---
 arch/loongarch/include/asm/cacheflush.h |  2 ++
 arch/loongarch/include/asm/pgtable.h    | 30 +++++++++++++++++++------
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..7907eb42bfbd 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page, nr)		do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
+#define flush_dcache_folio(folio)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index d28fb9dbec59..9154d317ffb4 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -334,12 +334,20 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	}
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }

+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	/* Preserve global status for the pair */
@@ -445,11 +453,19 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern void __update_tlb(struct vm_area_struct *vma,
 			unsigned long address, pte_t *ptep);

-static inline void update_mmu_cache(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	for (;;) {
+		__update_tlb(vma, address, ptep);
+		if (--nr == 0)
+			break;
+		address += PAGE_SIZE;
+		ptep++;
+	}
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
From patchwork Mon Feb 27 17:57:23 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62077
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Geert Uytterhoeven, linux-m68k@lists.linux-m68k.org
Subject: [PATCH v2 12/30] m68k: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:23 +0000
Message-Id: <20230227175741.71216-13-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Geert Uytterhoeven
Cc: linux-m68k@lists.linux-m68k.org
---
 arch/m68k/include/asm/cacheflush_mm.h | 26 +++++++++++++++++---------
 arch/m68k/include/asm/pgtable_mm.h    | 21 ++++++++++++++++++---
 arch/m68k/mm/motorola.c               |  2 +-
 3 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 1ac55e7b47f0..d43c8bce149b 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -220,24 +220,28 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm

 /* Push the page at kernel virtual address and clear the icache */
 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
-static inline void __flush_page_to_ram(void *vaddr)
+static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 {
 	if (CPU_IS_COLDFIRE) {
 		unsigned long addr, start, end;
 		addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1);
 		start = addr & ICACHE_SET_MASK;
-		end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK;
+		end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
 		if (start > end) {
 			flush_cf_bcache(0, end);
 			end = ICACHE_MAX_ADDR;
 		}
 		flush_cf_bcache(start, end);
 	} else if (CPU_IS_040_OR_060) {
-		__asm__ __volatile__("nop\n\t"
-				     ".chip 68040\n\t"
-				     "cpushp %%bc,(%0)\n\t"
-				     ".chip 68k"
-				     : : "a" (__pa(vaddr)));
+		unsigned long paddr = __pa(vaddr);
+
+		while (nr--) {
+			__asm__ __volatile__("nop\n\t"
+					     ".chip 68040\n\t"
+					     "cpushp %%bc,(%0)\n\t"
+					     ".chip 68k"
+					     : : "a" (paddr + nr * PAGE_SIZE));
+		}
 	} else {
 		unsigned long _tmp;
 		__asm__ __volatile__("movec %%cacr,%0\n\t"
@@ -249,10 +253,14 @@ static inline void __flush_page_to_ram(void *vaddr)
 }

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)	__flush_page_to_ram(page_address(page))
+#define flush_dcache_page(page)	__flush_pages_to_ram(page_address(page), 1)
+#define flush_dcache_folio(folio) \
+	__flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio))
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)	__flush_page_to_ram(page_address(page))
+#define flush_icache_pages(vma, page, nr) \
+	__flush_pages_to_ram(page_address(page), nr)
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)

 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				   unsigned long addr, int len);

diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index b93c41fe2067..400206c17c97 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -31,8 +31,20 @@ do{							\
 	*(pteptr) = (pteval);				\
 } while(0)

-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #if CONFIG_PGTABLE_LEVELS == 3
@@ -138,11 +150,14 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * tables contain all the necessary information.  The Sun3 does, but
  * they are updated on demand.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #endif	/* !__ASSEMBLY__ */

 /* MMU-specific headers */

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 2a375637e007..7784d0fcdf6e 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr)

 void mmu_page_ctor(void *page)
 {
-	__flush_page_to_ram(page);
+	__flush_pages_to_ram(page, 1);
 	flush_tlb_kernel_page(page);
 	nocache_page(page);
 }
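The ColdFire branch of __flush_pages_to_ram() above masks addresses with
ICACHE_SET_MASK, so a multi-page range can wrap around within the
cache's set-index space; when start > end the flush is split in two.
A user-space model of that wrap handling, assuming a hypothetical 8-bit
set mask and an artificially small page size so the wrap is visible:

#include <stdio.h>

#define SET_MASK   0xffu	/* stand-in for ICACHE_SET_MASK */
#define PAGE_SIZE_ 0x60u	/* small page, for illustration only */

static void flush_range_model(unsigned int addr, unsigned int nr)
{
	unsigned int start = addr & SET_MASK;
	unsigned int end = (addr + nr * PAGE_SIZE_ - 1) & SET_MASK;

	if (start > end) {	/* wrapped: split into two flushes */
		printf("flush sets 0x00..0x%02x\n", end);
		end = SET_MASK;
	}
	printf("flush sets 0x%02x..0x%02x\n", start, end);
}

int main(void)
{
	flush_range_model(0xe0, 2);	/* end wraps to 0x9f, so split */
	return 0;
}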
From patchwork Mon Feb 27 17:57:24 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62056
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Michal Simek
Subject: [PATCH v2 13/30] microblaze: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:24 +0000
Message-Id: <20230227175741.71216-14-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Also change the calling convention for set_pte()
to be the same as other architectures.
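Two details are worth noting here.  First, the old microblaze set_pte()
took (mm, addr, ptep, pte) although it used neither mm nor addr; the new
two-argument form is what the set_ptes() loops in this series expect to
call.  Second, the loops share one shape — "for (;;) { ...; if (--nr ==
0) break; ... }" — which advances the PTE value and pointer only between
stores, never past the final entry.  A runnable sketch of that idiom
with plain integers (the values are arbitrary):

#include <stdio.h>
#include <stdint.h>

/* Same control flow as the set_ptes() loops: the "advance" step runs
 * only when another store follows, so val is never incremented past
 * the last entry written. */
static void fill_like_set_ptes(uint32_t *slot, uint32_t val,
			       unsigned int nr, uint32_t step)
{
	for (;;) {
		*slot = val;
		if (--nr == 0)
			break;
		slot++;
		val += step;
	}
}

int main(void)
{
	uint32_t s[3];

	fill_like_set_ptes(s, 100, 3, 10);
	printf("%u %u %u\n", s[0], s[1], s[2]);	/* 100 110 120 */
	return 0;
}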
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Michal Simek
---
 arch/microblaze/include/asm/cacheflush.h |  8 ++++++++
 arch/microblaze/include/asm/pgtable.h    | 17 ++++++++++++-----
 arch/microblaze/include/asm/tlbflush.h   |  4 +++-
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index 39f8fb6768d8..e6641ff98cb3 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -74,6 +74,14 @@ do { \
 	flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \
 } while (0);

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	unsigned long addr = folio_pfn(folio) << PAGE_SHIFT;
+
+	flush_dcache_range(addr, addr + folio_size(folio));
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);

diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index d1b8272abcd9..a01e1369b486 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -330,18 +330,25 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-static inline void set_pte(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	*ptep = pte;
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_SHIFT_OFFSET;
+	}
 }

+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 		unsigned long address, pte_t *ptep)

diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index 2038168ed128..1b179e5e9062 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma,

 #define flush_tlb_kernel_range(start, end)	do { } while (0)

-#define update_mmu_cache(vma, addr, ptep)	do { } while (0)
+#define update_mmu_cache_range(vma, addr, ptep, nr)	do { } while (0)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #define flush_tlb_all local_flush_tlb_all
 #define flush_tlb_mm local_flush_tlb_mm
From patchwork Mon Feb 27 17:57:25 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62055
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v2 14/30] mips: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:25 +0000
Message-Id: <20230227175741.71216-15-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/include/asm/cacheflush.h | 32 +++++++++++------
 arch/mips/include/asm/pgtable.h    | 36 +++++++++++++------
 arch/mips/mm/c-r4k.c               |  5 +--
 arch/mips/mm/cache.c               | 56 +++++++++++++++---------------
 arch/mips/mm/init.c                | 17 +++++----
 5 files changed, 88 insertions(+), 58 deletions(-)

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index b3dc9c589442..2683cade42ef 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1

-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)	\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)	\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)

 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }

 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }

-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+	struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 791389bf3c12..0cf0455e6ae8 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -105,8 +105,10 @@ do {									\
 	}								\
 } while(0)

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr);
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)

@@ -204,19 +206,31 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 }
 #endif

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+	bool do_sync = false;

-	if (!pte_present(pteval))
-		goto cache_sync_done;
+	for (i = 0; i < nr; i++) {
+		if (!pte_present(pte))
+			continue;
+		if (pte_present(ptep[i]) &&
+		    (pte_pfn(ptep[i]) == pte_pfn(pte)))
+			continue;
+		do_sync = true;
+	}

-	if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
-		goto cache_sync_done;
+	if (do_sync)
+		__update_cache(addr, pte);

-	__update_cache(addr, pteval);
-cache_sync_done:
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }

 /*

diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index a549fa98c2f4..7d2a42f0cffd 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -679,13 +679,14 @@ static inline void local_r4k_flush_cache_page(void *args)
 	if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
 		vaddr = NULL;
 	else {
+		struct folio *folio = page_folio(page);
 		/*
 		 * Use kmap_coherent or kmap_atomic to do flushes for
 		 * another ASID than the current one.
 		 */
 		map_coherent = (cpu_has_dc_aliases &&
-				page_mapcount(page) &&
-				!Page_dcache_dirty(page));
+				folio_mapped(folio) &&
+				!folio_test_dcache_dirty(folio));
 		if (map_coherent)
 			vaddr = kmap_coherent(page, addr);
 		else

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 11b3e7ddafd5..0668435521fc 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -82,13 +82,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
 	return 0;
 }

-void __flush_dcache_page(struct page *page)
+void __flush_dcache_pages(struct page *page, unsigned int nr)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 	unsigned long addr;
+	unsigned int i;

 	if (mapping && !mapping_mapped(mapping)) {
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(folio);
 		return;
 	}

@@ -97,25 +99,21 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
-	if (PageHighMem(page))
-		addr = (unsigned long)kmap_atomic(page);
-	else
-		addr = (unsigned long)page_address(page);
-
-	flush_data_cache_page(addr);
-
-	if (PageHighMem(page))
-		kunmap_atomic((void *)addr);
+	for (i = 0; i < nr; i++) {
+		addr = (unsigned long)kmap_local_page(page + i);
+		flush_data_cache_page(addr);
+		kunmap_local((void *)addr);
+	}
 }
-
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(__flush_dcache_pages);

 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
 	unsigned long addr = (unsigned long) page_address(page);
+	struct folio *folio = page_folio(page);

 	if (pages_do_alias(addr, vmaddr)) {
-		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
+		if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 			void *kaddr;

 			kaddr = kmap_coherent(page, vmaddr);
@@ -130,27 +128,29 @@ EXPORT_SYMBOL(__flush_anon_page);

 void __update_cache(unsigned long address, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, addr;
 	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
+	unsigned int i;

 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;
-	page = pfn_to_page(pfn);
-	if (Page_dcache_dirty(page)) {
-		if (PageHighMem(page))
-			addr = (unsigned long)kmap_atomic(page);
-		else
-			addr = (unsigned long)page_address(page);
-
-		if (exec || pages_do_alias(addr, address & PAGE_MASK))
-			flush_data_cache_page(addr);
-		if (PageHighMem(page))
-			kunmap_atomic((void *)addr);
+
+	folio = page_folio(pfn_to_page(pfn));
+	address &= PAGE_MASK;
+	address -= offset_in_folio(folio, pfn << PAGE_SHIFT);
+
+	if (folio_test_dcache_dirty(folio)) {
+		for (i = 0; i < folio_nr_pages(folio); i++) {
+			addr = (unsigned long)kmap_local_folio(folio, i);
+			if (exec || pages_do_alias(addr, address))
+				flush_data_cache_page(addr);
+			kunmap_local((void *)addr);
+			address += PAGE_SIZE;
+		}
-		ClearPageDcacheDirty(page);
+		folio_clear_dcache_dirty(folio);
 	}
 }

diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 5a8002839550..19d4ca3b3fbd 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	pte_t pte;
 	int tlbidx;

-	BUG_ON(Page_dcache_dirty(page));
+	BUG_ON(folio_test_dcache_dirty(page_folio(page)));

 	preempt_disable();
 	pagefault_disable();
@@ -169,11 +169,12 @@ void kunmap_coherent(void)
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
 	void *vfrom, *vto;

 	vto = kmap_atomic(to);
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(from) && !Page_dcache_dirty(from)) {
+	    folio_mapped(src) && !folio_test_dcache_dirty(src)) {
 		vfrom = kmap_coherent(from, vaddr);
 		copy_page(vto, vfrom);
 		kunmap_coherent();
@@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(vto, src, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 	if (vma->vm_flags & VM_EXEC)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
@@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(copy_from_user_page);
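One detail worth calling out in the mips __update_cache() above: the
faulting address is converted to the user address of the folio's first
page by subtracting the faulting pfn's offset within the folio.  A
self-contained model of that arithmetic (the pfn and address constants
are invented for illustration):

#include <stdio.h>
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long folio_base_pfn = 0x200;	/* folio spans pfns 0x200..0x203 */
	unsigned long fault_pfn = 0x202;	/* the pte points at the 3rd page */
	unsigned long fault_addr = 0x7f0000002abc;

	unsigned long address = fault_addr & PAGE_MASK;
	unsigned long offset = (fault_pfn - folio_base_pfn) << PAGE_SHIFT;

	address -= offset;	/* user address of the folio's first page */
	printf("folio maps at %#lx\n", address);
	assert(address == 0x7f0000002000UL - 2 * PAGE_SIZE);
	return 0;
}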
From patchwork Mon Feb 27 17:57:26 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62054
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Dinh Nguyen
Subject: [PATCH v2 15/30] nios2: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:26 +0000
Message-Id: <20230227175741.71216-16-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.
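The per-folio PG_arch_1 scheme this commit message describes defers
cache maintenance: flush_dcache_folio() merely records that the folio
needs flushing while no user mapping can observe it, and the fault path
later flushes the whole folio once.  A user-space model of that
protocol (the structure and flag are illustrative, not the kernel's):

#include <stdio.h>
#include <stdbool.h>

#define PG_DCACHE_DIRTY (1u << 0)

struct folio_model {
	unsigned int flags;
	unsigned int nr_pages;
	bool mapped;		/* has a user mapping */
};

static void flush_hw_dcache(unsigned int page) { printf("flush page %u\n", page); }

static void flush_dcache_folio_model(struct folio_model *f)
{
	if (!f->mapped) {	/* no alias visible yet: defer */
		f->flags |= PG_DCACHE_DIRTY;
		return;
	}
	for (unsigned int i = 0; i < f->nr_pages; i++)
		flush_hw_dcache(i);
}

/* Plays the role of update_mmu_cache_range() at fault time. */
static void fault_in_model(struct folio_model *f)
{
	if (f->flags & PG_DCACHE_DIRTY) {
		for (unsigned int i = 0; i < f->nr_pages; i++)
			flush_hw_dcache(i);	/* flush the folio once */
		f->flags &= ~PG_DCACHE_DIRTY;
	}
}

int main(void)
{
	struct folio_model f = { .nr_pages = 4, .mapped = false };

	flush_dcache_folio_model(&f);	/* deferred: just sets the bit */
	f.mapped = true;
	fault_in_model(&f);		/* flushes all 4 pages, clears bit */
	return 0;
}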
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Dinh Nguyen
---
 arch/nios2/include/asm/cacheflush.h |  6 ++-
 arch/nios2/include/asm/pgtable.h    | 27 +++++++++----
 arch/nios2/mm/cacheflush.c          | 61 ++++++++++++++++-------------
 3 files changed, 58 insertions(+), 36 deletions(-)

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index d0b71dd71287..8624ca83cffe 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	unsigned long pfn);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio

 extern void flush_icache_range(unsigned long start, unsigned long end);
-extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 0f5c2564e9f5..8a77821a17a5 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -178,15 +178,23 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	      pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval));
-
-	flush_dcache_range(paddr, paddr + PAGE_SIZE);
-	set_pte(ptep, pteval);
+	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pte));
+
+	flush_dcache_range(paddr, paddr + nr * PAGE_SIZE);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1;
+	}
 }

+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline int pmd_none(pmd_t pmd)
 {
 	return (pmd_val(pmd) ==
@@ -273,7 +281,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 extern void __init paging_init(void);
 extern void __init mmu_init(void);

-extern void update_mmu_cache(struct vm_area_struct *vma,
-			     unsigned long address, pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #endif /* _ASM_NIOS2_PGTABLE_H */

diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 6aa9257c3ede..471485a84b2c 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 	__flush_icache(start, end);
 }

-void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
 {
 	unsigned long start = (unsigned long) page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + nr * PAGE_SIZE;

 	__flush_dcache(start, end);
 	__flush_icache(start, end);
@@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	__flush_icache(start, end);
 }

-void __flush_dcache_page(struct address_space *mapping, struct page *page)
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
 {
 	/*
	 * Writeback any data associated with the kernel mapping of this
	 * page.  This ensures that data in the physical page is mutually
	 * coherent with the kernels mapping.
	 */
-	unsigned long start = (unsigned long)page_address(page);
+	unsigned long start = (unsigned long)folio_address(folio);

-	__flush_dcache(start, start + PAGE_SIZE);
+	__flush_dcache(start, start + folio_size(folio));
 }

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;

@@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page)
 	/*
	 * The zero page is never written to, so never has any dirty
	 * cache lines, and therefore never needs to be flushed.
	 */
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

 	/* Flush this page if there are aliases. */
 	if (mapping && !mapping_mapped(mapping)) {
-		clear_bit(PG_dcache_clean, &page->flags);
+		clear_bit(PG_dcache_clean, &folio->flags);
 	} else {
-		__flush_dcache_page(mapping, page);
+		__flush_dcache_folio(mapping, folio);
 		if (mapping) {
-			unsigned long start = (unsigned long)page_address(page);
-			flush_aliases(mapping, page);
-			flush_icache_range(start, start + PAGE_SIZE);
+			unsigned long start = (unsigned long)folio_address(folio);
+			flush_aliases(mapping, folio);
+			flush_icache_range(start, start + folio_size(folio));
 		}
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
+EXPORT_SYMBOL(flush_dcache_page);

-void update_mmu_cache(struct vm_area_struct *vma,
-		      unsigned long address, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
 	unsigned long pfn = pte_pfn(pte);
-	struct page *page;
+	struct folio *folio;
 	struct address_space *mapping;

 	reload_tlb_page(vma, address, pte);
@@ -215,19 +222,19 @@ void update_mmu_cache(struct vm_area_struct *vma,
 	/*
	 * The zero page is never written to, so never has any dirty
	 * cache lines, and therefore never needs to be flushed.
	 */
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;

-	mapping = page_mapping_file(page);
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		__flush_dcache_page(mapping, page);
+	folio = page_folio(pfn_to_page(pfn));
+	mapping = folio_flush_mapping(folio);
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+		__flush_dcache_folio(mapping, folio);

-	if(mapping)
-	{
-		flush_aliases(mapping, page);
+	if (mapping) {
+		flush_aliases(mapping, folio);
 		if (vma->vm_flags & VM_EXEC)
-			flush_icache_page(vma, page);
+			flush_icache_pages(vma, &folio->page,
+					folio_nr_pages(folio));
 	}
 }
From patchwork Mon Feb 27 17:57:27 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62060
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Jonas Bonn, Stefan Kristiansson, Stafford Horne, linux-openrisc@vger.kernel.org
Subject: [PATCH v2 16/30] openrisc: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:27 +0000
Message-Id: <20230227175741.71216-17-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
per-folio.
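The OpenRISC update_cache() path below relies on the atomic
test-and-set idiom "dirty = !test_and_set_bit(PG_dc_clean,
&folio->flags)": of all racing callers, exactly one observes the bit's
0->1 transition and performs the icache/dcache sync.  A user-space
model of that property with C11 atomics (the names are invented):

#include <stdio.h>
#include <stdatomic.h>

#define PG_DC_CLEAN (1u << 0)

static atomic_uint folio_flags;

/* Returns 1 only for the caller that actually set the bit. */
static int mark_clean_once(atomic_uint *flags)
{
	unsigned int old = atomic_fetch_or(flags, PG_DC_CLEAN);

	return !(old & PG_DC_CLEAN);
}

int main(void)
{
	printf("first caller syncs: %d\n", mark_clean_once(&folio_flags));  /* 1 */
	printf("second caller skips: %d\n", mark_clean_once(&folio_flags)); /* 0 */
	return 0;
}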
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Jonas Bonn
Cc: Stefan Kristiansson
Cc: Stafford Horne
Cc: linux-openrisc@vger.kernel.org
---
 arch/openrisc/include/asm/cacheflush.h |  8 +++++++-
 arch/openrisc/include/asm/pgtable.h    | 27 +++++++++++++++++++++-----
 arch/openrisc/mm/cache.c               | 12 ++++++++----
 3 files changed, 37 insertions(+), 10 deletions(-)

diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h
index eeac40d4a854..984c331ff5f4 100644
--- a/arch/openrisc/include/asm/cacheflush.h
+++ b/arch/openrisc/include/asm/cacheflush.h
@@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page)
  */
 #define PG_dc_clean                  PG_arch_1

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_dc_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	clear_bit(PG_dc_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }

 #define flush_icache_user_page(vma, page, addr, len)	\

diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index 3eb9b9555d0d..1a7077150d7b 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -46,7 +46,21 @@ extern void paging_init(void);
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 /*
  * (pmds are folded into pgds so this doesn't get actually called,
  * but the define is needed for a generic inline function.)
@@ -379,13 +393,16 @@ static inline void update_tlb(struct vm_area_struct *vma,
 extern void update_cache(struct vm_area_struct *vma,
 	unsigned long address, pte_t *pte);

-static inline void update_mmu_cache(struct vm_area_struct *vma,
-	unsigned long address, pte_t *pte)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	update_tlb(vma, address, pte);
-	update_cache(vma, address, pte);
+	update_tlb(vma, address, ptep);
+	update_cache(vma, address, ptep);
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 /* __PHX__ FIXME, SWAP, this probably doesn't work */

 /*

diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
index 534a52ec5e66..eb43b73f3855 100644
--- a/arch/openrisc/mm/cache.c
+++ b/arch/openrisc/mm/cache.c
@@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address,
 	pte_t *pte)
 {
 	unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(pfn);
-	int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
+	struct folio *folio = page_folio(pfn_to_page(pfn));
+	int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);

 	/*
	 * Since icaches do not snoop for updated data on OpenRISC, we
	 * must write back and invalidate any dirty pages manually. We
	 * can skip data pages, since they will not end up in icaches.
	 */
-	if ((vma->vm_flags & VM_EXEC) && dirty)
-		sync_icache_dcache(page);
+	if ((vma->vm_flags & VM_EXEC) && dirty) {
+		unsigned int nr = folio_nr_pages(folio);
+
+		while (nr--)
+			sync_icache_dcache(folio_page(folio, nr));
+	}
 }
[2620:137:e000::1:20]) by mx.google.com with ESMTP id e2-20020a170902744200b0019b0b007994si7419041plt.163.2023.02.27.09.59.56; Mon, 27 Feb 2023 10:00:08 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=sXww9wYg; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230361AbjB0R63 (ORCPT + 99 others); Mon, 27 Feb 2023 12:58:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48748 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230184AbjB0R6J (ORCPT ); Mon, 27 Feb 2023 12:58:09 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD327241D6; Mon, 27 Feb 2023 09:57:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=f4iId2jx8WHA9aYmBaR2cIQrZXvmhrhHuZFPGqlcK5c=; b=sXww9wYgiCgowKq5XLFEcyHEop XW6NMHGVnb1oO6+9T/76a2+Y82dHeefnUfZBzhIxGcCj398YBLNFNhhOzidlolrvWvJDBitd7gk6/ am0yPVhFVNjf0/sdLFDqtWWhhNPXUrCXLh3zWC65Ttfpe+YSGMzQVnMGiwQZ3tAl4z+0gNOg09MfO RbigdZKOs0HwmktTL3Nl7hlzt1lhRKZbgsXReR+3nfUSk1gBCKQd3Ptp4XFepzP80wwCq+3zLcwY8 BWekd31pqnO73HDQhIWPmiQk4cxwfGgkLWx7V3AyvsUeWFPQlM2V2KaWXdd3FR8UhtiB1ZOCB1b8S wMs5O4gQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pWhkv-000IXs-9X; Mon, 27 Feb 2023 17:57:45 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, "James E.J. Bottomley" , Helge Deller , linux-parisc@vger.kernel.org Subject: [PATCH v2 17/30] parisc: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:28 +0000 Message-Id: <20230227175741.71216-18-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1759008059351185781?= X-GMAIL-MSGID: =?utf-8?q?1759008059351185781?= Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: "James E.J. 
Bottomley" Cc: Helge Deller Cc: linux-parisc@vger.kernel.org --- arch/parisc/include/asm/cacheflush.h | 14 ++-- arch/parisc/include/asm/pgtable.h | 28 +++++--- arch/parisc/kernel/cache.c | 101 +++++++++++++++++++-------- 3 files changed, 99 insertions(+), 44 deletions(-) diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index ff07c509e04b..0bf8b69d086b 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -46,16 +46,20 @@ void invalidate_kernel_vmap_range(void *vaddr, int size); #define flush_cache_vmap(start, end) flush_cache_all() #define flush_cache_vunmap(start, end) flush_cache_all() +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) -#define flush_icache_page(vma,page) do { \ - flush_kernel_dcache_page_addr(page_address(page)); \ - flush_kernel_icache_page(page_address(page)); \ -} while (0) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) #define flush_icache_range(s,e) do { \ flush_kernel_dcache_range_asm(s,e); \ diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index e2950f5db7c9..78ee9816f423 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -73,14 +73,7 @@ extern void __update_cache(pte_t pte); mb(); \ } while(0) -#define set_pte_at(mm, addr, pteptr, pteval) \ - do { \ - if (pte_present(pteval) && \ - pte_user(pteval)) \ - __update_cache(pteval); \ - *(pteptr) = (pteval); \ - purge_tlb_entries(mm, addr); \ - } while (0) +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) #endif /* !__ASSEMBLY__ */ @@ -391,11 +384,28 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) extern void paging_init (void); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + if (pte_present(pte) && pte_user(pte)) + __update_cache(pte); + for (;;) { + *ptep = pte; + purge_tlb_entries(mm, addr); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1 << PFN_PTE_SHIFT; + addr += PAGE_SIZE; + } +} + /* Used for deferring calls to flush_dcache_page() */ #define PG_dcache_dirty PG_arch_1 -#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep) +#define update_mmu_cache_range(vma, addr, ptep, nr) __update_cache(*ptep) +#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c index 984d3a1b3828..16057812103b 100644 --- a/arch/parisc/kernel/cache.c +++ b/arch/parisc/kernel/cache.c @@ -92,11 +92,11 @@ static inline void flush_data_cache(void) /* Kernel virtual address of pfn. */ #define pfn_va(pfn) __va(PFN_PHYS(pfn)) -void -__update_cache(pte_t pte) +void __update_cache(pte_t pte) { unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; + unsigned int nr; /* We don't have pte special. As a result, we can be called with an invalid pfn and we don't need to flush the kernel dcache page. 
@@ -104,13 +104,17 @@ __update_cache(pte_t pte) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page_mapping_file(page) && - test_bit(PG_dcache_dirty, &page->flags)) { - flush_kernel_dcache_page_addr(pfn_va(pfn)); - clear_bit(PG_dcache_dirty, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + if (folio_flush_mapping(folio) && + test_bit(PG_dcache_dirty, &folio->flags)) { + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); + clear_bit(PG_dcache_dirty, &folio->flags); } else if (parisc_requires_coherency()) - flush_kernel_dcache_page_addr(pfn_va(pfn)); + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); } void @@ -365,6 +369,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad preempt_enable(); } +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) +{ + void *kaddr = page_address(page); + + for (;;) { + flush_kernel_dcache_page_addr(kaddr); + flush_kernel_icache_page(kaddr); + if (--nr == 0) + break; + page += PAGE_SIZE; + } +} + static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) { pte_t *ptep = NULL; @@ -393,26 +411,30 @@ static inline bool pte_needs_flush(pte_t pte) == (_PAGE_PRESENT | _PAGE_ACCESSED); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - struct address_space *mapping = page_mapping_file(page); - struct vm_area_struct *mpnt; - unsigned long offset; + struct address_space *mapping = folio_flush_mapping(folio); + struct vm_area_struct *vma; unsigned long addr, old_addr = 0; + void *kaddr; unsigned long count = 0; + unsigned long i, nr; pgoff_t pgoff; if (mapping && !mapping_mapped(mapping)) { - set_bit(PG_dcache_dirty, &page->flags); + set_bit(PG_dcache_dirty, &folio->flags); return; } - flush_kernel_dcache_page_addr(page_address(page)); + nr = folio_nr_pages(folio); + kaddr = folio_address(folio); + for (i = 0; i < nr; i++) + flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE); if (!mapping) return; - pgoff = page->index; + pgoff = folio->index; /* * We have carefully arranged in arch_get_unmapped_area() that @@ -422,15 +444,29 @@ void flush_dcache_page(struct page *page) * on machines that support equivalent aliasing */ flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - addr = mpnt->vm_start + offset; - if (parisc_requires_coherency()) { - pte_t *ptep; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long offset = pgoff - vma->vm_pgoff; + unsigned long pfn = folio_pfn(folio); + + addr = vma->vm_start; + nr = folio_nr_pages(folio); + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + addr += offset * PAGE_SIZE; + } + if (addr + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - addr) / PAGE_SIZE; - ptep = get_ptep(mpnt->vm_mm, addr); - if (ptep && pte_needs_flush(*ptep)) - flush_user_cache_page(mpnt, addr); + if (parisc_requires_coherency()) { + for (i = 0; i < nr; i++) { + pte_t *ptep = get_ptep(vma->vm_mm, + addr + i * PAGE_SIZE); + if (ptep && pte_needs_flush(*ptep)) + flush_user_cache_page(vma, + addr + i * PAGE_SIZE); + } } else { /* * The TLB is the engine of coherence on parisc: @@ -443,27 +479,32 @@ void flush_dcache_page(struct page *page) * in (until the user or kernel specifically * accesses it, of course) */ - flush_tlb_page(mpnt, addr); + for (i = 0; i < nr; i++) + 
flush_tlb_page(vma, addr + i * PAGE_SIZE);
 			if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1))
 					!= (addr & (SHM_COLOUR - 1))) {
-				__flush_cache_page(mpnt, addr, page_to_phys(page));
+				for (i = 0; i < nr; i++)
+					__flush_cache_page(vma,
+						addr + i * PAGE_SIZE,
+						(pfn + i) * PAGE_SIZE);
 				/*
 				 * Software is allowed to have any number
 				 * of private mappings to a page.
 				 */
-				if (!(mpnt->vm_flags & VM_SHARED))
+				if (!(vma->vm_flags & VM_SHARED))
 					continue;
 				if (old_addr)
 					pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n",
-						old_addr, addr, mpnt->vm_file);
-				old_addr = addr;
+						old_addr, addr, vma->vm_file);
+				if (nr == folio_nr_pages(folio))
+					old_addr = addr;
 			}
 		}
 		WARN_ON(++count == 4096);
 	}
 	flush_dcache_mmap_unlock(mapping);
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);

 /* Defined in arch/parisc/kernel/pacache.S */
 EXPORT_SYMBOL(flush_kernel_dcache_range_asm);

From patchwork Mon Feb 27 17:57:29 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62061
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Michael Ellerman, Nicholas Piggin, Christophe Leroy, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 18/30] powerpc: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:29 +0000
Message-Id: <20230227175741.71216-19-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio.

I'm unsure about my merging of flush_dcache_icache_hugepage() and
flush_dcache_icache_page() into flush_dcache_icache_folio() and
subsequent removal of flush_dcache_icache_phys().  Please review.
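For reviewers, the per-folio flag protocol this relies on can be
summarised in a few lines.  This is an illustrative sketch only, not
part of the patch; the helper name make_folio_coherent() is invented,
but the pattern matches hash_page_do_lazy_icache() below:

/* Sketch: the deferred-flush protocol with a per-folio clean bit. */
static void make_folio_coherent(struct folio *folio)
{
	/* Clean means every sub-page was already pushed to the icache. */
	if (test_bit(PG_dcache_clean, &folio->flags))
		return;
	/* Flush every page of the folio, then mark the whole folio clean. */
	flush_dcache_icache_folio(folio);
	set_bit(PG_dcache_clean, &folio->flags);
}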
Signed-off-by: Matthew Wilcox (Oracle) Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org --- arch/powerpc/include/asm/book3s/pgtable.h | 10 +-- arch/powerpc/include/asm/cacheflush.h | 14 ++-- arch/powerpc/include/asm/kvm_ppc.h | 10 +-- arch/powerpc/include/asm/nohash/pgtable.h | 13 ++-- arch/powerpc/include/asm/pgtable.h | 6 ++ arch/powerpc/mm/book3s64/hash_utils.c | 11 +-- arch/powerpc/mm/cacheflush.c | 81 +++-------------------- arch/powerpc/mm/nohash/e500_hugetlbpage.c | 3 +- arch/powerpc/mm/pgtable.c | 51 ++++++++------ 9 files changed, 73 insertions(+), 126 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h index d18b748ea3ae..c2ef811505b0 100644 --- a/arch/powerpc/include/asm/book3s/pgtable.h +++ b/arch/powerpc/include/asm/book3s/pgtable.h @@ -9,13 +9,6 @@ #endif #ifndef __ASSEMBLY__ -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - - #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty); @@ -36,7 +29,8 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * corresponding HPTE into the hash table ahead of time, instead of * waiting for the inevitable extra hash-table miss exception. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index 7564dd4fd12b..ef7d2de33b89 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ b/arch/powerpc/include/asm/cacheflush.h @@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * It just marks the page as not i-cache clean. We do the i-cache * flush later when the page is given to a user process, if necessary. 
*/ -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) return; /* avoid an atomic op if possible */ - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } void flush_icache_range(unsigned long start, unsigned long stop); @@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); #define flush_icache_user_page flush_icache_user_page -void flush_dcache_icache_page(struct page *page); +void flush_dcache_icache_folio(struct folio *folio); /** * flush_dcache_range(): Write any modified data cache blocks out to memory and diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 6bef23d6d0e3..e91dd8e88bb7 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -868,7 +868,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids); static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) { - struct page *page; + struct folio *folio; /* * We can only access pages that the kernel maps * as memory. Bail out for unmapped ones. @@ -877,10 +877,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) return; /* Clear i-cache for new pages */ - page = pfn_to_page(pfn); - if (!test_bit(PG_dcache_clean, &page->flags)) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index a6caaaab6f92..69a7dd47a9f0 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -166,12 +166,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE); } -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but @@ -282,10 +276,11 @@ static inline int pud_huge(pud_t pud) * for the page which has just been mapped in. 
 */
 #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE)
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
 #else
-static inline
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {}
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr) {}
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9972626ddaf6..bf1263ff7e67 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -41,6 +41,12 @@ struct mm_struct;
 
 #ifndef __ASSEMBLY__
 
+void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+		pte_t pte, unsigned int nr);
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #ifndef MAX_PTRS_PER_PGD
 #define MAX_PTRS_PER_PGD PTRS_PER_PGD
 #endif
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index fedffe3ae136..ad2afa08e62e 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void)
  */
 unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (!pfn_valid(pte_pfn(pte)))
 		return pp;
 
-	page = pte_page(pte);
+	folio = page_folio(pte_page(pte));
 
 	/* page is dirty */
-	if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) {
+	if (!test_bit(PG_dcache_clean, &folio->flags) &&
+	    !folio_test_reserved(folio)) {
 		if (trap == INTERRUPT_INST_STORAGE) {
-			flush_dcache_icache_page(page);
-			set_bit(PG_dcache_clean, &page->flags);
+			flush_dcache_icache_folio(folio);
+			set_bit(PG_dcache_clean, &folio->flags);
 		} else
 			pp |= HPTE_R_N;
 	}
diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c
index 0e9b4879c0f9..8ea6a096a664 100644
--- a/arch/powerpc/mm/cacheflush.c
+++ b/arch/powerpc/mm/cacheflush.c
@@ -76,51 +76,6 @@ void flush_icache_range(unsigned long start, unsigned long stop)
 }
 EXPORT_SYMBOL(flush_icache_range);
 
-#ifdef CONFIG_HIGHMEM
-/**
- * flush_dcache_icache_phys() - Flush a page by it's physical address
- * @physaddr: the physical address of the page
- */
-static void flush_dcache_icache_phys(unsigned long physaddr)
-{
-	unsigned long bytes = l1_dcache_bytes();
-	unsigned long nb = PAGE_SIZE / bytes;
-	unsigned long addr = physaddr & PAGE_MASK;
-	unsigned long msr, msr0;
-	unsigned long loop1 = addr, loop2 = addr;
-
-	msr0 = mfmsr();
-	msr = msr0 & ~MSR_DR;
-	/*
-	 * This must remain as ASM to prevent potential memory accesses
-	 * while the data MMU is disabled
-	 */
-	asm volatile(
-		" mtctr %2;\n"
-		" mtmsr %3;\n"
-		" isync;\n"
-		"0: dcbst 0, %0;\n"
-		" addi %0, %0, %4;\n"
-		" bdnz 0b;\n"
-		" sync;\n"
-		" mtctr %2;\n"
-		"1: icbi 0, %1;\n"
-		" addi %1, %1, %4;\n"
-		" bdnz 1b;\n"
-		" sync;\n"
-		" mtmsr %5;\n"
-		" isync;\n"
-		: "+&r" (loop1), "+&r" (loop2)
-		: "r" (nb), "r" (msr), "i" (bytes), "r" (msr0)
-		: "ctr", "memory");
-}
-NOKPROBE_SYMBOL(flush_dcache_icache_phys)
-#else
-static void flush_dcache_icache_phys(unsigned long physaddr)
-{
-}
-#endif
-
 /**
  * __flush_dcache_icache(): Flush a particular page from the data cache to RAM.
* Note: this is necessary because the instruction cache does *not* @@ -148,17 +103,20 @@ static void __flush_dcache_icache(void *p) invalidate_icache_range(addr, addr + PAGE_SIZE); } -static void flush_dcache_icache_hugepage(struct page *page) +void flush_dcache_icache_folio(struct folio *folio) { - int i; - int nr = compound_nr(page); + unsigned int i, nr = folio_nr_pages(folio); - if (!PageHighMem(page)) { + if (flush_coherent_icache()) + return; + + if (!folio_test_highmem(folio)) { + void *addr = folio_address(folio); for (i = 0; i < nr; i++) - __flush_dcache_icache(lowmem_page_address(page + i)); + __flush_dcache_icache(addr + i * PAGE_SIZE); } else { for (i = 0; i < nr; i++) { - void *start = kmap_local_page(page + i); + void *start = kmap_local_folio(folio, i * PAGE_SIZE); __flush_dcache_icache(start); kunmap_local(start); @@ -166,27 +124,6 @@ static void flush_dcache_icache_hugepage(struct page *page) } } -void flush_dcache_icache_page(struct page *page) -{ - if (flush_coherent_icache()) - return; - - if (PageCompound(page)) - return flush_dcache_icache_hugepage(page); - - if (!PageHighMem(page)) { - __flush_dcache_icache(lowmem_page_address(page)); - } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { - void *start = kmap_local_page(page); - - __flush_dcache_icache(start); - kunmap_local(start); - } else { - flush_dcache_icache_phys(page_to_phys(page)); - } -} -EXPORT_SYMBOL(flush_dcache_icache_page); - void clear_user_page(void *page, unsigned long vaddr, struct page *pg) { clear_page(page); diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c index 58c8d9849cb1..f3cb91107a47 100644 --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c +++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c @@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte) * * This must always be called with the pte lock held. 
 */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	if (is_vm_hugetlb_page(vma))
 		book3e_hugetlb_preload(vma, address, *ptep);
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cb2dcdb18f8e..b3c7b874a7a2 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte)
 	return 0;
 }
 
-static struct page *maybe_pte_to_page(pte_t pte)
+static struct folio *maybe_pte_to_folio(pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
 	struct page *page;
@@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte)
 	page = pfn_to_page(pfn);
 	if (PageReserved(page))
 		return NULL;
-	return page;
+	return page_folio(page);
 }
 
 #ifdef CONFIG_PPC_BOOK3S
@@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte)
 	pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS);
 	if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) ||
 				       cpu_has_feature(CPU_FTR_NOEXECUTE))) {
-		struct page *pg = maybe_pte_to_page(pte);
-		if (!pg)
+		struct folio *folio = maybe_pte_to_folio(pte);
+		if (!folio)
 			return pte;
-		if (!test_bit(PG_dcache_clean, &pg->flags)) {
-			flush_dcache_icache_page(pg);
-			set_bit(PG_dcache_clean, &pg->flags);
+		if (!test_bit(PG_dcache_clean, &folio->flags)) {
+			flush_dcache_icache_folio(folio);
+			set_bit(PG_dcache_clean, &folio->flags);
 		}
 	}
 	return pte;
@@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; }
  */
 static inline pte_t set_pte_filter(pte_t pte)
 {
-	struct page *pg;
+	struct folio *folio;
 
 	if (radix_enabled())
 		return pte;
@@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte)
 		return pte;
 
 	/* If you set _PAGE_EXEC on weird pages you're on your own */
-	pg = maybe_pte_to_page(pte);
-	if (unlikely(!pg))
+	folio = maybe_pte_to_folio(pte);
+	if (unlikely(!folio))
 		return pte;
 
 	/* If the page clean, we move on */
-	if (test_bit(PG_dcache_clean, &pg->flags))
+	if (test_bit(PG_dcache_clean, &folio->flags))
 		return pte;
 
 	/* If it's an exec fault, we flush the cache and make it clean */
 	if (is_exec_fault()) {
-		flush_dcache_icache_page(pg);
-		set_bit(PG_dcache_clean, &pg->flags);
+		flush_dcache_icache_folio(folio);
+		set_bit(PG_dcache_clean, &folio->flags);
 		return pte;
 	}
 
@@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte)
 static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 				     int dirty)
 {
-	struct page *pg;
+	struct folio *folio;
 
 	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
 		return pte;
@@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 #endif /* CONFIG_DEBUG_VM */
 
 	/* If you set _PAGE_EXEC on weird pages you're on your own */
-	pg = maybe_pte_to_page(pte);
-	if (unlikely(!pg))
+	folio = maybe_pte_to_folio(pte);
+	if (unlikely(!folio))
 		goto bail;
 
 	/* If the page is already clean, we move on */
-	if (test_bit(PG_dcache_clean, &pg->flags))
+	if (test_bit(PG_dcache_clean, &folio->flags))
 		goto bail;
 
 	/* Clean the page and set PG_dcache_clean */
-	flush_dcache_icache_page(pg);
-	set_bit(PG_dcache_clean, &pg->flags);
+	flush_dcache_icache_folio(folio);
+	set_bit(PG_dcache_clean, &folio->flags);
 
 bail:
 	return pte_mkexec(pte);
@@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 /*
  * set_pte stores a linux PTE into the linux page table.
 */
-void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
-		pte_t pte)
+void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+		pte_t pte, unsigned int nr)
 {
 	/*
 	 * Make sure hardware valid bit is not set. We don't do
@@ -203,7 +203,14 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 	pte = set_pte_filter(pte);
 
 	/* Perform the setting of the PTE */
-	__set_pte_at(mm, addr, ptep, pte, 0);
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pte, 0);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + PAGE_SIZE);
+		addr += PAGE_SIZE;
+	}
 }
 
 void unmap_kernel_page(unsigned long va)

From patchwork Mon Feb 27 17:57:30 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62062
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Alexandre Ghiti, Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [PATCH v2 19/30] riscv: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:30 +0000
Message-Id: <20230227175741.71216-20-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
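For context, the intended calling convention from the core MM looks
roughly like the sketch below.  This is illustrative only and not part
of the patch; map_folio_sketch() is an invented name, and it assumes
the folio is mapped in full at 'addr':

static void map_folio_sketch(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep, struct folio *folio, pgprot_t prot)
{
	unsigned int nr = folio_nr_pages(folio);
	pte_t pte = mk_pte(&folio->page, prot);

	/* One call sets all nr PTEs; the arch advances the PFN itself. */
	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vma, addr, ptep, nr);
}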
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org --- arch/riscv/include/asm/cacheflush.h | 19 +++++++++---------- arch/riscv/include/asm/pgtable.h | 26 +++++++++++++++++++------- arch/riscv/mm/cacheflush.c | 11 ++--------- 3 files changed, 30 insertions(+), 26 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 03e3b95ae6da..10e5e96f09b5 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). - */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing. diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index b516f3b59616..3a3a776fc047 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -405,8 +405,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -415,8 +415,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. 
 */
-	local_flush_tlb_page(address);
+	while (nr--)
+		local_flush_tlb_page(address + nr * PAGE_SIZE);
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb update_mmu_cache
@@ -456,12 +459,21 @@ static inline void __set_pte_at(struct mm_struct *mm,
 	set_pte(ptep, pteval);
 }
 
-static inline void set_pte_at(struct mm_struct *mm,
-	unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pteval, unsigned int nr)
 {
-	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
-	__set_pte_at(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, nr);
+
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pteval);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+		pte_val(pteval) += 1 << _PAGE_PFN_SHIFT;
+	}
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 static inline void pte_clear(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep)
diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index fcd6145fbead..e36a851e5788 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -81,16 +81,9 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
 #ifdef CONFIG_MMU
 void flush_icache_pte(pte_t pte)
 {
-	struct page *page = pte_page(pte);
+	struct folio *folio = page_folio(pte_page(pte));
 
-	/*
-	 * HugeTLB pages are always fully mapped, so only setting head page's
-	 * PG_dcache_clean flag is enough.
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
 		flush_icache_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }

From patchwork Mon Feb 27 17:57:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62059
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org
Subject: [PATCH v2 20/30] s390: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:31 +0000
Message-Id: <20230227175741.71216-21-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().
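As a reference point, set_ptes() below must be equivalent to the naive
per-PTE loop it replaces.  Sketch only, not part of the patch
(set_ptes_naive() is an invented name); it relies on s390 PTEs carrying
the physical address, so adding PAGE_SIZE to the PTE value moves it to
the next page frame:

static inline void set_ptes_naive(struct mm_struct *mm, unsigned long addr,
				  pte_t *ptep, pte_t entry, unsigned int nr)
{
	unsigned int i;

	/* One old-style single-PTE store per page of the range. */
	for (i = 0; i < nr; i++)
		set_pte_at(mm, addr + i * PAGE_SIZE, ptep + i,
			   __pte(pte_val(entry) + i * PAGE_SIZE));
}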
Signed-off-by: Matthew Wilcox (Oracle) Cc: Heiko Carstens Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: linux-s390@vger.kernel.org --- arch/s390/include/asm/pgtable.h | 34 ++++++++++++++++++++++++--------- 1 file changed, 25 insertions(+), 9 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 2c70b4d1263d..46bf475116f1 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m); * tables contain all the necessary information. */ #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, addr, ptep, nr) do { } while (0) #define update_mmu_cache_pmd(vma, address, ptep) do { } while (0) /* @@ -1317,21 +1318,36 @@ pgprot_t pgprot_writecombine(pgprot_t prot); pgprot_t pgprot_writethrough(pgprot_t prot); /* - * Certain architectures need to do special things when PTEs - * within a page table are directly modified. Thus, the following - * hook is made available. + * Set multiple PTEs to consecutive pages with a single call. All PTEs + * are within the same folio, PMD and VMA. */ -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t entry) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t entry, unsigned int nr) { if (pte_present(entry)) entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED)); - if (mm_has_pgste(mm)) - ptep_set_pte_at(mm, addr, ptep, entry); - else - set_pte(ptep, entry); + if (mm_has_pgste(mm)) { + for (;;) { + ptep_set_pte_at(mm, addr, ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + addr += PAGE_SIZE; + } + } else { + for (;;) { + set_pte(ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + } + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + /* * Conversion functions: convert a page and protection to a page entry, * and a page entry and page directory to the page they refer to. 
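A usage note on the wrapper above: callers that set a single PTE keep
working unchanged, since set_pte_at() is now just the nr == 1 case of
set_ptes().  Illustrative example only, not part of the patch:

static void set_one_pte_example(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t entry)
{
	/* Expands to set_ptes(mm, addr, ptep, entry, 1). */
	set_pte_at(mm, addr, ptep, entry);
}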
From patchwork Mon Feb 27 17:57:32 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 62067
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, linux-sh@vger.kernel.org
Subject: [PATCH v2 21/30] superh: Implement the new page table range API
Date: Mon, 27 Feb 2023 17:57:32 +0000
Message-Id: <20230227175741.71216-22-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>
References: <20230227175741.71216-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Change the PG_dcache_clean flag from being
per-page to per-folio.  Flush the entire folio containing the pages in
flush_icache_pages() for ease of implementation.
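Flushing the whole containing folio is a superset of flushing the nr
requested pages, so it is always safe, merely sometimes more work than
strictly needed.  As an illustrative sketch (not part of the patch; the
function name is invented and it assumes a lowmem folio):

static void flush_icache_pages_via_folio(struct vm_area_struct *vma,
					 struct page *page, unsigned int nr)
{
	struct folio *folio = page_folio(page);

	/* Purge the folio's entire kernel mapping; nr is intentionally
	 * ignored because the whole folio is flushed anyway. */
	__flush_purge_region(folio_address(folio), folio_size(folio));
}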
Signed-off-by: Matthew Wilcox (Oracle) Cc: Yoshinori Sato Cc: Rich Felker Cc: John Paul Adrian Glaubitz Cc: linux-sh@vger.kernel.org --- arch/sh/include/asm/cacheflush.h | 21 ++++++++----- arch/sh/include/asm/pgtable.h | 6 ++-- arch/sh/include/asm/pgtable_32.h | 16 ++++++++-- arch/sh/mm/cache-j2.c | 4 +-- arch/sh/mm/cache-sh4.c | 26 ++++++++++----- arch/sh/mm/cache-sh7705.c | 26 +++++++++------ arch/sh/mm/cache.c | 54 ++++++++++++++++++-------------- arch/sh/mm/kmap.c | 3 +- 8 files changed, 101 insertions(+), 55 deletions(-) diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 481a664287e2..9fceef6f3e00 100644 --- a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -13,9 +13,9 @@ * - flush_cache_page(mm, vmaddr, pfn) flushes a single page * - flush_cache_range(vma, start, end) flushes a range of pages * - * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache + * - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache * - flush_icache_range(start, end) flushes(invalidates) a range for icache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache * - flush_cache_sigtramp(vaddr) flushes the signal trampoline */ extern void (*local_flush_cache_all)(void *args); @@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args); extern void (*local_flush_cache_dup_mm)(void *args); extern void (*local_flush_cache_page)(void *args); extern void (*local_flush_cache_range)(void *args); -extern void (*local_flush_dcache_page)(void *args); +extern void (*local_flush_dcache_folio)(void *args); extern void (*local_flush_icache_range)(void *args); -extern void (*local_flush_icache_page)(void *args); +extern void (*local_flush_icache_folio)(void *args); extern void (*local_flush_cache_sigtramp)(void *args); static inline void cache_noop(void *args) { } @@ -42,11 +42,18 @@ extern void flush_cache_page(struct vm_area_struct *vma, extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range -extern void flush_icache_page(struct vm_area_struct *vma, - struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 3ce30becf6df..1a8fdc3bc363 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -102,13 +102,15 @@ extern void __update_cache(struct vm_area_struct *vma, extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void -update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; __update_cache(vma, address, pte); __update_tlb(vma, address, pte); } +#define 
update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..03ba1834e126 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -307,7 +307,19 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define set_pte(pteptr, pteval) (*(pteptr) = pteval) #endif -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * (pmds are folded into pgds so this doesn't get actually called, @@ -323,7 +335,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define pte_none(x) (!pte_val(x)) #define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) #define pmd_none(x) (!pmd_val(x)) #define pmd_present(x) (pmd_val(x)) diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c index f277862a11f5..9ac960214380 100644 --- a/arch/sh/mm/cache-j2.c +++ b/arch/sh/mm/cache-j2.c @@ -55,9 +55,9 @@ void __init j2_cache_init(void) local_flush_cache_dup_mm = j2_flush_both; local_flush_cache_page = j2_flush_both; local_flush_cache_range = j2_flush_both; - local_flush_dcache_page = j2_flush_dcache; + local_flush_dcache_folio = j2_flush_dcache; local_flush_icache_range = j2_flush_icache; - local_flush_icache_page = j2_flush_icache; + local_flush_icache_folio = j2_flush_icache; local_flush_cache_sigtramp = j2_flush_icache; pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base)); diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c index 72c2e1b46c08..862046f26981 100644 --- a/arch/sh/mm/cache-sh4.c +++ b/arch/sh/mm/cache-sh4.c @@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh4_flush_dcache_page(void *arg) +static void sh4_flush_dcache_folio(void *arg) { - struct page *page = arg; - unsigned long addr = (unsigned long)page_address(page); + struct folio *folio = arg; #ifndef CONFIG_SMP - struct address_space *mapping = page_mapping_file(page); + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); else #endif - flush_cache_one(CACHE_OC_ADDRESS_ARRAY | - (addr & shm_align_mask), page_to_phys(page)); + { + unsigned long pfn = folio_pfn(folio); + unsigned long addr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + flush_cache_one(CACHE_OC_ADDRESS_ARRAY | + (addr & shm_align_mask), + pfn * PAGE_SIZE); + addr += PAGE_SIZE; + pfn++; + } + } wmb(); } @@ -379,7 +389,7 @@ void __init sh4_cache_init(void) __raw_readl(CCN_PRR)); local_flush_icache_range = sh4_flush_icache_range; - local_flush_dcache_page = sh4_flush_dcache_page; + local_flush_dcache_folio = sh4_flush_dcache_folio; local_flush_cache_all = sh4_flush_cache_all; local_flush_cache_mm = sh4_flush_cache_mm; local_flush_cache_dup_mm = sh4_flush_cache_mm; diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c index 9b63a53a5e46..b509a407588f 100644 --- a/arch/sh/mm/cache-sh7705.c +++ b/arch/sh/mm/cache-sh7705.c @@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys) * Write back & invalidate the D-cache of the page. * (To avoid "alias" issues) */ -static void sh7705_flush_dcache_page(void *arg) +static void sh7705_flush_dcache_folio(void *arg) { - struct page *page = arg; - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = arg; + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); - else - __flush_dcache_page(__pa(page_address(page))); + clear_bit(PG_dcache_clean, &folio->flags); + else { + unsigned long pfn = folio_pfn(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_dcache_page((pfn + i) * PAGE_SIZE); + } } static void sh7705_flush_cache_all(void *args) @@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args) * Not entirely sure why this is necessary on SH3 with 32K cache but * without it we get occasional "Memory fault" when loading a program. 
*/ -static void sh7705_flush_icache_page(void *page) +static void sh7705_flush_icache_folio(void *arg) { - __flush_purge_region(page_address(page), PAGE_SIZE); + struct folio *folio = arg; + __flush_purge_region(folio_address(folio), folio_size(folio)); } void __init sh7705_cache_init(void) { local_flush_icache_range = sh7705_flush_icache_range; - local_flush_dcache_page = sh7705_flush_dcache_page; + local_flush_dcache_folio = sh7705_flush_dcache_folio; local_flush_cache_all = sh7705_flush_cache_all; local_flush_cache_mm = sh7705_flush_cache_all; local_flush_cache_dup_mm = sh7705_flush_cache_all; local_flush_cache_range = sh7705_flush_cache_all; local_flush_cache_page = sh7705_flush_cache_page; - local_flush_icache_page = sh7705_flush_icache_page; + local_flush_icache_folio = sh7705_flush_icache_folio; } diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c index 3aef78ceb820..93fc5fb8ec1c 100644 --- a/arch/sh/mm/cache.c +++ b/arch/sh/mm/cache.c @@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop; void (*local_flush_cache_dup_mm)(void *args) = cache_noop; void (*local_flush_cache_page)(void *args) = cache_noop; void (*local_flush_cache_range)(void *args) = cache_noop; -void (*local_flush_dcache_page)(void *args) = cache_noop; +void (*local_flush_dcache_folio)(void *args) = cache_noop; void (*local_flush_icache_range)(void *args) = cache_noop; -void (*local_flush_icache_page)(void *args) = cache_noop; +void (*local_flush_icache_folio)(void *args) = cache_noop; void (*local_flush_cache_sigtramp)(void *args) = cache_noop; void (*__flush_wback_region)(void *start, int size); @@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + struct folio *folio = page_folio(page); + + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(vto); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } if (vma->vm_flags & VM_EXEC) @@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + test_bit(PG_dcache_clean, &folio->flags)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(vfrom); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } } void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); - if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) && - test_bit(PG_dcache_clean, &from->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) && + test_bit(PG_dcache_clean, &src->flags)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(vfrom); @@ -136,35 +141,37 @@ EXPORT_SYMBOL(clear_user_highpage); void __update_cache(struct 
vm_area_struct *vma, unsigned long address, pte_t pte) { - struct page *page; unsigned long pfn = pte_pfn(pte); if (!boot_cpu_data.dcache.n_aliases) return; - page = pfn_to_page(pfn); if (pfn_valid(pfn)) { - int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags); if (dirty) - __flush_purge_region(page_address(page), PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } void __flush_anon_page(struct page *page, unsigned long vmaddr) { + struct folio *folio = page_folio(page); unsigned long addr = (unsigned long) page_address(page); if (pages_do_alias(addr, vmaddr)) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); /* XXX.. For now kunmap_coherent() does a purge */ /* __flush_purge_region((void *)kaddr, PAGE_SIZE); */ kunmap_coherent(kaddr); - } else - __flush_purge_region((void *)addr, PAGE_SIZE); + } else + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } @@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, } EXPORT_SYMBOL(flush_cache_range); -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - cacheop_on_each_cpu(local_flush_dcache_page, page, 1); + cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void flush_icache_range(unsigned long start, unsigned long end) { @@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(flush_icache_range); -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { - /* Nothing uses the VMA, so just pass the struct page along */ - cacheop_on_each_cpu(local_flush_icache_page, page, 1); + /* Nothing uses the VMA, so just pass the folio along */ + cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1); } void flush_cache_sigtramp(unsigned long address) diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c index 73fd7cc99430..fa50e8f6e7a9 100644 --- a/arch/sh/mm/kmap.c +++ b/arch/sh/mm/kmap.c @@ -27,10 +27,11 @@ void __init kmap_coherent_init(void) void *kmap_coherent(struct page *page, unsigned long addr) { + struct folio *folio = page_folio(page); enum fixed_addresses idx; unsigned long vaddr; - BUG_ON(!test_bit(PG_dcache_clean, &page->flags)); + BUG_ON(!test_bit(PG_dcache_clean, &folio->flags)); preempt_disable(); pagefault_disable(); From patchwork Mon Feb 27 17:57:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62058
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, "David S.
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v2 22/30] sparc32: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:33 +0000 Message-Id: <20230227175741.71216-23-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1759008020451963372?= X-GMAIL-MSGID: =?utf-8?q?1759008020451963372?= Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_32.h | 9 +++++++-- arch/sparc/include/asm/pgtable_32.h | 15 ++++++++++++++- arch/sparc/mm/init_32.c | 13 +++++++++++-- 3 files changed, 32 insertions(+), 5 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index adb6991d0455..8dba35d63328 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -16,6 +16,7 @@ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) #define flush_icache_page(vma, pg) do { } while (0) +#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ @@ -35,11 +36,15 @@ #define flush_page_for_dma(addr) \ sparc32_cachetlb_ops->page_for_dma(addr) -struct page; void sparc_flush_page_to_ram(struct page *page); +void sparc_flush_folio_to_ram(struct folio *folio); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) sparc_flush_page_to_ram(page) +#define flush_dcache_folio(folio) sparc_flush_folio_to_ram(folio) +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..47ae55ea1837 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -101,7 +101,19 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) srmmu_swap((unsigned long *)ptep, pte_val(pteval)); } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline int srmmu_device_memory(unsigned long x) { @@ -318,6 +330,7 @@ void mmu_info(struct seq_file *m); #define FAULT_CODE_USER 0x4 #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) void srmmu_mapiorange(unsigned int bus, unsigned long xpa, unsigned long xva, unsigned int len); diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 9c0ea457bdf0..d96a14ffceeb 
100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page) { unsigned long vaddr = (unsigned long)page_address(page); - if (vaddr) - __flush_page_to_ram(vaddr); + __flush_page_to_ram(vaddr); } EXPORT_SYMBOL(sparc_flush_page_to_ram); +void sparc_flush_folio_to_ram(struct folio *folio) +{ + unsigned long vaddr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_page_to_ram(vaddr + i * PAGE_SIZE); +} +EXPORT_SYMBOL(sparc_flush_folio_to_ram); + static const pgprot_t protection_map[16] = { [VM_NONE] = PAGE_NONE, [VM_READ] = PAGE_READONLY, From patchwork Mon Feb 27 17:57:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62081
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, "David S. Miller" , sparclinux@vger.kernel.org Subject: [PATCH v2 23/30] sparc64: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:34 +0000 Message-Id: <20230227175741.71216-24-willy@infradead.org> In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Convert the PG_dcache_dirty flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S.
Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 25 +++++++-- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 117 insertions(+), 65 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 2dc8d4641734..d5c0088e0c6a 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -911,8 +911,20 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1); #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -931,8 +943,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -947,7 +959,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_area_struct *, unsigned long addr, + pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, 
addr, ptep, 1) #ifdef CONFIG_TRANSPARENT_HUGEPAGE void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd); diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index a55295d1b924..90ef8677ac89 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah; #endif extern unsigned long xcall_flush_dcache_page_spitfire; -static inline void __local_flush_dcache_page(struct page *page) +static inline void __local_flush_dcache_folio(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } -void smp_flush_dcache_page_impl(struct page *page, int cpu) +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu) { int this_cpu; @@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) this_cpu = get_cpu(); if (cpu == this_cpu) { - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); } else if (cpu_online(cpu)) { - void *pg_addr = page_address(page); + void *pg_addr = folio_address(folio); u64 data0 = 0; if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpumask_of(cpu)); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpumask_of(cpu)); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } } put_cpu(); } -void flush_dcache_page_all(struct mm_struct *mm, struct page *page) +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio) { void *pg_addr; u64 data0; @@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) atomic_inc(&dcpage_flushes); #endif data0 = 0; - pg_addr = page_address(page); + pg_addr = folio_address(folio); if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpu_online_mask); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpu_online_mask); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE;
+ } } - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); preempt_enable(); } diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 04f9db0c3111..ab9aacbaf43c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0); #endif #endif -inline void flush_dcache_page_impl(struct page *page) +inline void flush_dcache_folio_impl(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + BUG_ON(tlb_type == hypervisor); #ifdef CONFIG_DEBUG_DCFLUSH atomic_inc(&dcpage_flushes); #endif #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), - ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, + ((tlb_type == spitfire) && + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } @@ -218,10 +223,10 @@ inline void flush_dcache_page_impl(struct page *page) #define PG_dcache_cpu_mask \ ((1UL<<ilog2(roundup_pow_of_two(NR_CPUS)))-1UL) -#define dcache_dirty_cpu(page) \ - (((page)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) +#define dcache_dirty_cpu(folio) \ + (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) -static inline void set_dcache_dirty(struct page *page, int this_cpu) +static inline void set_dcache_dirty(struct folio *folio, int this_cpu) { unsigned long mask = this_cpu; unsigned long non_cpu_bits; @@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu) "bne,pn %%xcc, 1b\n\t" " nop" : /* no outputs */ - : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags) + : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags) : "g1", "g7"); } -static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) +static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu) { unsigned long mask = (1UL << PG_dcache_dirty); @@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) " nop\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags), + : "r" (cpu), "r" (mask), "r" (&folio->flags), "i" (PG_dcache_cpu_mask), "i" (PG_dcache_cpu_shift) : "g1", "g7"); @@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn) page = pfn_to_page(pfn); if (page) { + struct folio *folio = page_folio(page); unsigned long pg_flags; - pg_flags = page->flags; + pg_flags = folio->flags; if (pg_flags & (1UL << PG_dcache_dirty)) { int cpu = ((pg_flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask); @@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn) * in the SMP case.
*/ if (cpu == this_cpu) - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); else - smp_flush_dcache_page_impl(page, cpu); + smp_flush_dcache_folio_impl(folio, cpu); - clear_dcache_dirty_cpu(page, cpu); + clear_dcache_dirty_cpu(folio, cpu); put_cpu(); } @@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size) } #endif /* CONFIG_HUGETLB_PAGE */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { struct mm_struct *mm; unsigned long flags; bool is_huge_tsb; pte_t pte = *ptep; + unsigned int i; if (tlb_type != hypervisor) { unsigned long pfn = pte_pfn(pte); @@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * } } #endif - if (!is_huge_tsb) - __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, - address, pte_val(pte)); + if (!is_huge_tsb) { + for (i = 0; i < nr; i++) { + __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, + address, pte_val(pte)); + address += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } + } spin_unlock_irqrestore(&mm->context.lock, flags); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { + unsigned long pfn = folio_pfn(folio); struct address_space *mapping; int this_cpu; @@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page) * is merely the zero page. The 'bigcore' testcase in GDB * causes this case to run millions of times. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; this_cpu = get_cpu(); - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) { - int dirty = test_bit(PG_dcache_dirty, &page->flags); + bool dirty = test_bit(PG_dcache_dirty, &folio->flags); if (dirty) { - int dirty_cpu = dcache_dirty_cpu(page); + int dirty_cpu = dcache_dirty_cpu(folio); if (dirty_cpu == this_cpu) goto out; - smp_flush_dcache_page_impl(page, dirty_cpu); + smp_flush_dcache_folio_impl(folio, dirty_cpu); } - set_dcache_dirty(page, this_cpu); + set_dcache_dirty(folio, this_cpu); } else { /* We could delay the flush for the !page_mapping * case too. But that case is for exec env/arg * pages and those are %99 certainly going to get * faulted into the tlb (and thus flushed) anyways. */ - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); } out: put_cpu(); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void __kprobes flush_icache_range(unsigned long start, unsigned long end) { @@ -2280,10 +2294,10 @@ void __init paging_init(void) setup_page_offset(); /* These build time checkes make sure that the dcache_dirty_cpu() - * page->flags usage will work. + * folio->flags usage will work. * * When a page gets marked as dcache-dirty, we store the - * cpu number starting at bit 32 in the page->flags. Also, + * cpu number starting at bit 32 in the folio->flags. Also, * functions like clear_dcache_dirty_cpu use the cpu mask * in 13-bit signed-immediate instruction fields. 
*/ diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index 9a725547578e..3fa6a070912d 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, unsigned long paddr, pfn = pte_pfn(orig); struct address_space *mapping; struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) goto no_cache_flush; @@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, goto no_cache_flush; /* A real file page? */ - mapping = page_mapping_file(page); + folio = page_folio(page); + mapping = folio_flush_mapping(folio); if (!mapping) goto no_cache_flush; paddr = (unsigned long) page_address(page); if ((paddr ^ vaddr) & (1 << 13)) - flush_dcache_page_all(mm, page); + flush_dcache_folio_all(mm, folio); } no_cache_flush: From patchwork Mon Feb 27 17:57:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62074
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Richard Weinberger , Anton Ivanov , Johannes Berg , linux-um@lists.infradead.org Subject: [PATCH v2 24/30] um: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:35 +0000 Message-Id: <20230227175741.71216-25-willy@infradead.org> In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> Add set_ptes() and update_mmu_cache_range().
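For reference, this is a minimal sketch of how a batching caller is expected to drive the pair of hooks this patch provides; the helper name and its locals are illustrative only and are not part of this patch or of the generic code:

/*
 * Hypothetical caller, for illustration only: map the nr pages of a
 * folio with one set_ptes() call instead of nr set_pte_at() calls,
 * then let the architecture batch any cache/TLB maintenance.
 */
static void example_map_folio(struct vm_area_struct *vma, unsigned long addr,
			      pte_t *ptep, struct folio *folio, pgprot_t prot)
{
	unsigned int nr = folio_nr_pages(folio);
	pte_t pte = mk_pte(&folio->page, prot);

	/* Writes nr consecutive PTEs; the arch advances the PFN itself. */
	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	/* One hook call covers the whole range (a no-op on um). */
	update_mmu_cache_range(vma, addr, ptep, nr);
}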
Signed-off-by: Matthew Wilcox (Oracle) Cc: Richard Weinberger Cc: Anton Ivanov Cc: Johannes Berg Cc: linux-um@lists.infradead.org --- arch/um/include/asm/pgtable.h | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h index a70d1618eb35..ca78c90ae74f 100644 --- a/arch/um/include/asm/pgtable.h +++ b/arch/um/include/asm/pgtable.h @@ -242,12 +242,20 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval) if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *pteptr, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - set_pte(pteptr, pteval); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + #define __HAVE_ARCH_PTE_SAME static inline int pte_same(pte_t pte_a, pte_t pte_b) { @@ -290,6 +298,7 @@ struct mm_struct; extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr); #define update_mmu_cache(vma,address,ptep) do {} while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do {} while (0) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that From patchwork Mon Feb 27 17:57:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62071
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" Subject: [PATCH v2 25/30] x86: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:36 +0000 Message-Id: <20230227175741.71216-26-willy@infradead.org> In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> Convert set_pte_at() into set_ptes() and add a noop update_mmu_cache_range(). Signed-off-by: Matthew Wilcox (Oracle) Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H.
Peter Anvin" --- arch/x86/include/asm/pgtable.h | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 84be3e07b112..f424371ea143 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1019,13 +1019,22 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp) return res; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - set_pte(ptep, pte); + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { @@ -1291,6 +1300,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { } +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) +{ +} static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { From patchwork Mon Feb 27 17:57:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 62057 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp2561046wrd; Mon, 27 Feb 2023 09:59:28 -0800 (PST) X-Google-Smtp-Source: AK7set+WyfMNXEDH6EsaISxjTqmA6HJN5dRK0C0EeGEcCif9b3FNZC6ejbcLPF89qg4533goyNct X-Received: by 2002:a17:902:da91:b0:19a:7648:512 with SMTP id j17-20020a170902da9100b0019a76480512mr33383168plx.30.1677520768292; Mon, 27 Feb 2023 09:59:28 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677520768; cv=none; d=google.com; s=arc-20160816; b=SayKS7gyRT/qrmBhbaMjYcC9G/IcZQw1Zqn1/zSV+pKsRAtohxpSIF8VEiKC4z26P3 xfBv4swn/c4Ef9FGGJ1U9ZM9nVWb4JZlYrh6dZRo2lEpdEHv82PFJFXKW6bKrHpbnCfa nEp2jRhx9agGENi7apdkePCMWuXVz45c17gOFNu/y61xbBwthLBm3hQMC28V21Kkdy1s CbRsUD0+WIAaXbOajZYMMnZeDuSJxtvuKtB0ato8kRftw0Xp7NuvB7gciYhrOvaamYVq WWz2EfO0lB/8BkpGLyxm8+IsBRTc6nuhcqPQqm92i3jbFtdNxUk1CSBf+9sB+chUt/N9 mRpA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=FcAVSqRhOvKHjYDKfG+YKts/V6VftTlUNnoYvCjS+is=; b=aCOF88J9pq7YfSP7uOkQYJSPL9isd7ootBd6zaVBzc0YDtjvmsyUDhjjfLo1Ys9GMj qwKGMJ/d90CrjuxAp6ZPkC65SAaHcqhZAa9iAbKsMqOEQngph/DzssWF5SHjscsEWf5r 2NlaS2X6p3kbdtfzAzVWo7W/xp6o5O+z5LV5SslDUKUzGcCBMi/0MMZuO6/yT1MdPCty ZIsQNey3X6gAonOA+QKFdOMZohXyK7mYqcSjN2HPXTCOPLIXgqd8YPOFuu4zJc0XXsUr fPJWUs83iY9iQ7QZFJWsYLBdiBR85I2gL7Tho0fFrsa/weRCVaSfAIlhbWITQSHHGhNi ZE0g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b="oduE/873"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Max Filippov , linux-xtensa@linux-xtensa.org Subject: [PATCH v2 26/30] xtensa: Implement the new page table range API Date: Mon, 27 Feb 2023 17:57:37 +0000 Message-Id: <20230227175741.71216-27-willy@infradead.org> In-Reply-To: <20230227175741.71216-1-willy@infradead.org> References: <20230227175741.71216-1-willy@infradead.org> Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages().
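For review purposes, the contract set_ptes() must satisfy can be written as the following equivalent loop. This is an editorial reference only, not code added to the tree; it uses the same pte_val() + PAGE_SIZE idiom as the per-arch implementations in this series, and so assumes (as they do) that the physical address occupies the PTE bits scaled by PAGE_SIZE:

/*
 * Reference semantics of set_ptes(): equivalent to nr set_pte_at()
 * calls, with the virtual address and the physical address encoded
 * in the PTE both advancing by one page per iteration.
 */
static inline void set_ptes_reference(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, pte_t pte, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		set_pte_at(mm, addr + i * PAGE_SIZE, ptep + i,
			   __pte(pte_val(pte) + i * PAGE_SIZE));
}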
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 24 +++++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 72 insertions(+), 44 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)		do { } while (0)
+#define	flush_icache_pages(vma, page, nr)	do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..293101530541 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -301,17 +301,25 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	update_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void
 set_pmd(pmd_t *pmdp, pmd_t pmdval)
 {
@@ -407,8 +415,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern void update_mmu_cache(struct vm_area_struct * vma,
-		unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;
 
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..65c0d5298041 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
 
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_local(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }
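The set_ptes() loop above encodes a contract that generalises beyond xtensa:
the caller passes the PTE for the first page of the folio, and the
implementation derives each following entry by advancing the physical
address one page at a time. Here is a minimal user-space model of that
invariant, not kernel code: pte_t, PAGE_SIZE and the permission bits are
stand-ins, and the mm/addr arguments of the real API are elided.

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

typedef unsigned long pte_t;	/* stand-in: physical address | perm bits */

static void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;		/* stand-in for the arch's update_pte() */
}

static void set_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		pte += PAGE_SIZE;	/* next page of the same folio */
	}
}

int main(void)
{
	pte_t ptes[4] = { 0 };

	set_ptes(ptes, 0x10000UL | 0x7UL, 4);	/* map a 4-page folio */
	for (int i = 0; i < 4; i++)
		printf("pte[%d] = %#lx\n", i, ptes[i]);
	/* consecutive entries differ by exactly PAGE_SIZE */
	assert(ptes[3] == ((0x10000UL + 3 * PAGE_SIZE) | 0x7UL));
	return 0;
}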
From patchwork Mon Feb 27 17:57:38 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v2 27/30] filemap: Add filemap_map_folio_range()
Date: Mon, 27 Feb 2023 17:57:38 +0000
Message-Id: <20230227175741.71216-28-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>

From: Yin Fengwei

filemap_map_folio_range() maps a partial or full folio. Compared to the
original filemap_map_pages(), it updates the folio refcount once per folio
instead of once per page, which is a minor performance improvement for
large folios.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; upstreaming it to will-it-scale is proposed
at [1]), this got a 2% performance gain on a 48C/96T Cascade Lake test box
with 96 processes running against xfs.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 98 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 44 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2723104cc06a..db86e459dde6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2202,16 +2202,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 }
 EXPORT_SYMBOL(filemap_get_folios);
 
-static inline
-bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
-{
-	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
-		return false;
-	if (index >= max)
-		return false;
-	return index < folio->index + folio_nr_pages(folio) - 1;
-}
-
 /**
  * filemap_get_folios_contig - Get a batch of contiguous folios
  * @mapping:	The address_space to search
@@ -3483,6 +3473,53 @@ static inline struct folio *next_map_page(struct address_space *mapping,
 					  mapping, xas, end_pgoff);
 }
 
+/*
+ * Map page range [start_page, start_page + nr_pages) of folio.
+ * start_page is gotten from start by folio_page(folio, start)
+ */
+static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
+			struct folio *folio, unsigned long start,
+			unsigned long addr, unsigned int nr_pages)
+{
+	vm_fault_t ret = 0;
+	struct vm_area_struct *vma = vmf->vma;
+	struct file *file = vma->vm_file;
+	struct page *page = folio_page(folio, start);
+	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
+	unsigned int ref_count = 0, count = 0;
+
+	do {
+		if (PageHWPoison(page))
+			continue;
+
+		if (mmap_miss > 0)
+			mmap_miss--;
+
+		/*
+		 * NOTE: If there're PTE markers, we'll leave them to be
+		 * handled in the specific fault path, and it'll prohibit the
+		 * fault-around logic.
+		 */
+		if (!pte_none(*vmf->pte))
+			continue;
+
+		if (vmf->address == addr)
+			ret = VM_FAULT_NOPAGE;
+
+		ref_count++;
+		do_set_pte(vmf, page, addr);
+		update_mmu_cache(vma, addr, vmf->pte);
+	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+
+	/* Restore the vmf->pte */
+	vmf->pte -= nr_pages;
+
+	folio_ref_add(folio, ref_count);
+	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
+
+	return ret;
+}
+
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 			     pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
@@ -3493,9 +3530,9 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
-	struct page *page;
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
 	vm_fault_t ret = 0;
+	int nr_pages = 0;
 
 	rcu_read_lock();
 	folio = first_map_page(mapping, &xas, end_pgoff);
@@ -3510,45 +3547,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	do {
-again:
-		page = folio_file_page(folio, xas.xa_index);
-		if (PageHWPoison(page))
-			goto unlock;
-
-		if (mmap_miss > 0)
-			mmap_miss--;
+		unsigned long end;
 
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
+		end = folio->index + folio_nr_pages(folio) - 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
 
-		/*
-		 * NOTE: If there're PTE markers, we'll leave them to be
-		 * handled in the specific fault path, and it'll prohibit the
-		 * fault-around logic.
-		 */
-		if (!pte_none(*vmf->pte))
-			goto unlock;
+		ret |= filemap_map_folio_range(vmf, folio,
+				xas.xa_index - folio->index, addr, nr_pages);
+		xas.xa_index += nr_pages;
 
-		/* We're about to handle the fault */
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
-
-		do_set_pte(vmf, page, addr);
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, addr, vmf->pte);
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			folio_ref_inc(folio);
-			goto again;
-		}
-		folio_unlock(folio);
-		continue;
-unlock:
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			goto again;
-		}
 		folio_unlock(folio);
 		folio_put(folio);
 	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);
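The refcount batching this patch introduces can be sketched in a small,
self-contained user-space model. struct folio, its ref field and
page_usable() below are stand-ins for the kernel's folio refcount and the
HWPoison/pte_none checks: pages that cannot be mapped are skipped, and the
folio reference count is raised once for the whole batch rather than once
per page.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct folio { atomic_int ref; };

/* stand-in for the per-page HWPoison/pte_none checks; page 2 is skipped */
static bool page_usable(unsigned int i) { return i != 2; }

static unsigned int map_folio_range(struct folio *folio, unsigned int nr_pages)
{
	unsigned int i, ref_count = 0;

	for (i = 0; i < nr_pages; i++) {
		if (!page_usable(i))
			continue;	/* leave it to the regular fault path */
		/* ... the PTE for page i would be set up here ... */
		ref_count++;
	}
	/* one atomic add instead of nr_pages atomic increments */
	atomic_fetch_add(&folio->ref, ref_count);
	return ref_count;
}

int main(void)
{
	struct folio f;
	unsigned int mapped;

	atomic_init(&f.ref, 1);
	mapped = map_folio_range(&f, 8);
	printf("mapped %u pages, refcount now %d\n", mapped,
	       atomic_load(&f.ref));	/* mapped 7, refcount 8 */
	return 0;
}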
From patchwork Mon Feb 27 17:57:39 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v2 28/30] rmap: add folio_add_file_rmap_range()
Date: Mon, 27 Feb 2023 17:57:39 +0000
Message-Id: <20230227175741.71216-29-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>

From: Yin Fengwei

folio_add_file_rmap_range() allows adding a PTE mapping for a specific
range of a file folio. Compared to page_add_file_rmap(), it batches the
__lruvec_stat updates for large folios.
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 60 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..a3825ce81102 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index bacdb795d5ee..fffdb85a3b3d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1303,31 +1303,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * folio_add_file_rmap_range - add pte mapping to page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
 * @vma:	the vm area in which the mapping is added
 * @compound:	charge the page as compound or small page
 *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
+ *
 * The caller needs to hold the pte lock.
 */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void folio_add_file_rmap_range(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma,
+		bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr_pmdmapped = 0, first;
+	int nr = 0;
 
-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
@@ -1356,6 +1364,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page:	page to remove mapping from
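The statistics batching in folio_add_file_rmap_range() can also be
sketched in user-space C. page_mapcount[], nr_file_mapped and the -1
initial value below are stand-ins for the kernel's per-page _mapcount and
the NR_FILE_MAPPED-style accounting: per-page mapcounts still change one
by one, but the shared counter is updated once with the number of pages
that became mapped for the first time.

#include <stdatomic.h>
#include <stdio.h>

#define NR_PAGES 8

static atomic_int page_mapcount[NR_PAGES];	/* -1 == unmapped, as in the kernel */
static atomic_long nr_file_mapped;		/* the batched counter */

static void add_file_rmap_range(unsigned int start, unsigned int nr_pages)
{
	long nr = 0;

	for (unsigned int i = start; i < start + nr_pages; i++) {
		/* first mapping of this page? (_mapcount goes -1 -> 0) */
		if (atomic_fetch_add(&page_mapcount[i], 1) == -1)
			nr++;
	}
	if (nr)		/* one counter update for the whole range */
		atomic_fetch_add(&nr_file_mapped, nr);
}

int main(void)
{
	for (int i = 0; i < NR_PAGES; i++)
		atomic_init(&page_mapcount[i], -1);

	add_file_rmap_range(0, NR_PAGES);
	add_file_rmap_range(0, 4);	/* second mapping: no counter change */
	printf("NR_FILE_MAPPED = %ld\n", atomic_load(&nr_file_mapped)); /* 8 */
	return 0;
}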
From patchwork Mon Feb 27 17:57:40 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v2 29/30] mm: Convert do_set_pte() to set_pte_range()
Date: Mon, 27 Feb 2023 17:57:40 +0000
Message-Id: <20230227175741.71216-30-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>

From: Yin Fengwei

set_pte_range() allows setting up the page table entries for a specific
range of pages. It takes advantage of batched rmap updates for large
folios, and it now takes care of calling update_mmu_cache_range() itself.
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 27 +++++++++++++++------------
 4 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block.  If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f79667824eb..568ebe7058d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1168,7 +1168,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index db86e459dde6..07ebd90967a3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3507,8 +3507,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index bfa3100ec5a3..b09c03e4fadb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4256,7 +4256,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = pte_marker_uffd_wp(vmf->orig_pte);
@@ -4264,7 +4265,7 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	bool prefault = vmf->address != addr;
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4278,14 +4279,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4358,11 +4363,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
 
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
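One way to see what this conversion buys: the mm counter update in
set_pte_range() turns N read-modify-write operations on a shared counter
into one. A toy model of just that effect (file_rss stands in for the
per-mm rss counter; the helper names mirror the kernel's but these are
stubs, not the real API):

#include <stdatomic.h>
#include <stdio.h>

static atomic_long file_rss;	/* stand-in for the per-mm rss counter */

static void inc_mm_counter(void)    { atomic_fetch_add(&file_rss, 1); }
static void add_mm_counter(long nr) { atomic_fetch_add(&file_rss, nr); }

int main(void)
{
	for (int i = 0; i < 512; i++)	/* old: one atomic op per page */
		inc_mm_counter();

	add_mm_counter(512);		/* new: one atomic op per range */

	printf("file_rss = %ld\n", atomic_load(&file_rss));	/* 1024 */
	return 0;
}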
From patchwork Mon Feb 27 17:57:41 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v2 30/30] filemap: Batch PTE mappings
Date: Mon, 27 Feb 2023 17:57:41 +0000
Message-Id: <20230227175741.71216-31-willy@infradead.org>
In-Reply-To: <20230227175741.71216-1-willy@infradead.org>

From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page. This batches the updates to the mm counters and the rmap.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; upstreaming it to will-it-scale is proposed
at [1]), this got a 15% performance gain on a 48C/96T Cascade Lake test
box with 96 processes running against xfs.
Perf data collected before/after the change:

  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |          |
                     |           --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state

   9.93%--page_add_file_rmap_range
          |
           --2.67%--__mod_lruvec_page_state
                     |
                     |--1.95%--__mod_memcg_lruvec_state
                     |          |
                     |           --1.57%--cgroup_rstat_updated
                     |
                      --0.61%--__mod_lruvec_state
                                |
                                 --0.54%--__mod_node_page_state

The running time of __mod_lruvec_page_state() is reduced by about 9%.

[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 07ebd90967a3..40be33b5ee46 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3486,11 +3486,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3500,20 +3501,33 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
 		if (vmf->address == addr)
 			ret = VM_FAULT_NOPAGE;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
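The control flow added above is a run-batching state machine: consecutive
mappable pages accumulate in count, and a run is flushed whenever a hole
is hit, plus once more after the loop. A self-contained user-space model
of that loop (mappable[] stands in for the HWPoison and pte_none checks;
flush_run() stands in for set_pte_range() plus folio_ref_add()):

#include <stdbool.h>
#include <stdio.h>

static void flush_run(unsigned int first, unsigned int count)
{
	/* the kernel does set_pte_range() + folio_ref_add() here */
	printf("set_pte_range(pages %u..%u), folio_ref_add(%u)\n",
	       first, first + count - 1, count);
}

static void map_folio_range(const bool *mappable, unsigned int nr_pages)
{
	unsigned int start = 0, count = 0;

	for (unsigned int i = 0; i < nr_pages; i++) {
		if (mappable[i]) {
			count++;	/* extend the current run */
			continue;
		}
		if (count)		/* flush the run before the hole */
			flush_run(start, count);
		start = i + 1;		/* restart after the skipped page */
		count = 0;
	}
	if (count)			/* final partial run */
		flush_run(start, count);
}

int main(void)
{
	/* pages 0-2 mappable, page 3 skipped, pages 4-7 mappable */
	const bool mappable[8] = { 1, 1, 1, 0, 1, 1, 1, 1 };

	map_folio_range(mappable, 8);	/* two batched calls, not seven */
	return 0;
}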