From patchwork Mon Jul 10 20:43:02 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118090
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 01/38] minmax: Add in_range() macro
Date: Mon, 10 Jul 2023 21:43:02 +0100
Message-Id: <20230710204339.3554919-2-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Determine if a value lies within a range more efficiently (subtraction +
comparison vs two comparisons and an AND).  It also has useful (under
some circumstances) behaviour if the range exceeds the maximum value of
the type.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/minmax.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/minmax.h b/include/linux/minmax.h
index 396df1121bff..028069a1f7ef 100644
--- a/include/linux/minmax.h
+++ b/include/linux/minmax.h
@@ -158,6 +158,32 @@
  */
 #define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi)
 
+static inline bool in_range64(u64 val, u64 start, u64 len)
+{
+	return (val - start) < len;
+}
+
+static inline bool in_range32(u32 val, u32 start, u32 len)
+{
+	return (val - start) < len;
+}
+
+/**
+ * in_range - Determine if a value lies within a range.
+ * @val: Value to test.
+ * @start: First value in range.
+ * @len: Number of values in range.
+ *
+ * This is more efficient than "if (start <= val && val < (start + len))".
+ * It also gives a different answer if @start + @len overflows the size of
+ * the type by a sufficient amount to encompass @val.  Decide for yourself
+ * which behaviour you want, or prove that start + len never overflow.
+ * Do not blindly replace one form with the other.
+ */
+#define in_range(val, start, len)					\
+	sizeof(start) <= sizeof(u32) ? in_range32(val, start, len) :	\
+	in_range64(val, start, len)
+
 /**
  * swap - swap values of @a and @b
  * @a: first value
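The overflow behaviour is easiest to see concretely.  A minimal
standalone sketch (ordinary userspace C with fixed-width types standing
in for the kernel's u32; not part of the patch) showing how the
subtraction form and the naive two-comparison form disagree when
start + len wraps around the type:

/* sketch.c - standalone demonstration, not kernel code */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Same body as the kernel's in_range32(); unsigned wrap is well-defined. */
static bool in_range32(uint32_t val, uint32_t start, uint32_t len)
{
	return (val - start) < len;
}

/* The naive form; start + len may wrap around the type. */
static bool naive(uint32_t val, uint32_t start, uint32_t len)
{
	return start <= val && val < start + len;
}

int main(void)
{
	/* The range [0xfffffff0, 0xfffffff0 + 0x20) wraps past UINT32_MAX. */
	uint32_t start = 0xfffffff0, len = 0x20, val = 0x5;

	/* Prints "1 0": the subtraction form treats the wrapped range as
	 * including 0x5, while the naive form rejects it. */
	printf("%d %d\n", in_range32(val, start, len), naive(val, start, len));
	return 0;
}

This is exactly the "decide for yourself which behaviour you want" case
the kernel-doc comment warns about.
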
From patchwork Mon Jul 10 20:43:03 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118091
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Pasha Tatashin, Anshuman Khandual
Subject: [PATCH v5 02/38] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Mon, 10 Jul 2023 21:43:03 +0100
Message-Id: <20230710204339.3554919-3-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport (IBM)
Acked-by: Pasha Tatashin
Reviewed-by: Anshuman Khandual
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e8a252e62b12..a44a150e0318 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -348,7 +348,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	return __set_pte_at(mm, addr, ptep, pte);
 }
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 75970ee2bda2..2137e36595b3 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -499,7 +499,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 		pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
 	__set_pte_at(mm, addr, ptep, pteval);
 }
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5700bb337987..c6242bc58a71 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1023,7 +1023,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	set_pte(ptep, pte);
 }
 
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 	__page_table_check_pud_clear(mm, addr, pud);
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }
 
 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }
 
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 93ec7690a0d8..662b3f5d31b4 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -190,20 +190,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
 
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pte_clear(mm, addr, ptep_get(ptep));
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep_get(ptep + i));
 	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
 				     pte_write(pte));
 	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);
 
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
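A minimal userspace model (plain C, not the kernel code above) of the
interface change: the set-side checker now validates nr consecutive
entries in a single call instead of being invoked once per PTE.

/* model.c - standalone demonstration, not kernel code */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t pte_t;	/* stand-in for the kernel type */

/* Stand-in for __page_table_check_pte_clear(): validate the old entry. */
static void check_pte_clear(pte_t old)
{
	printf("checking old pte 0x%llx\n", (unsigned long long)old);
}

/* Mirrors the loop added to __page_table_check_ptes_set() above. */
static void check_ptes_set(pte_t *ptep, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++)
		check_pte_clear(ptep[i]);
}

int main(void)
{
	pte_t ptes[4] = { 0x1000, 0x2000, 0x3000, 0x4000 };

	check_ptes_set(ptes, 4);	/* one call covers all four PTEs */
	return 0;
}
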
From patchwork Mon Jul 10 20:43:04 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118088
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v5 03/38] mm: Add generic flush_icache_pages() and documentation
Date: Mon, 10 Jul 2023 21:43:04 +0100
Message-Id: <20230710204339.3554919-4-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 39 ++++++++++++++++-------------
 include/asm-generic/cacheflush.h    |  5 ++++
 2 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..b645947954fb 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,17 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_fault *vmf,
+   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
+   unsigned int nr)``
 
-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
+
+	This routine is also invoked in various other places which pass
+	a NULL "vmf".
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +310,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it
 	will clear this bit when such a page first enters the pagecache.
 
-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.
 
 .. important::
 
@@ -369,7 +374,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache.  In the future, the hope
+	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
 
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {
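A hedged sketch of how an architecture with only a per-page primitive
might provide the new range hook; this loop is illustrative and not part
of the patch (the generic fallback above is deliberately a no-op):

/* Sketch of a possible arch override, assuming a working per-page flush. */
static inline void flush_icache_pages(struct vm_area_struct *vma,
		struct page *page, unsigned int nr)
{
	unsigned int i;

	/* Fall back to the (deprecated) per-page flush, once per page. */
	for (i = 0; i < nr; i++)
		flush_icache_page(vma, page + i);
}

Architectures whose flush primitive already covers the whole address
space can instead issue a single call regardless of nr, as alpha does
later in this series.
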
From patchwork Mon Jul 10 20:43:05 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118079
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v5 04/38] mm: Add folio_flush_mapping()
Date: Mon, 10 Jul 2023 21:43:05 +0100
Message-Id: <20230710204339.3554919-5-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 69b99b61ed72..794e4e55dc38 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -389,6 +389,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
 	return folio->mapping;
 }
 
+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return NULL;
+
+	return folio_mapping(folio);
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
 	return folio_file_mapping(page_folio(page));
@@ -399,11 +419,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }
 
 /**
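A hedged sketch of the kind of call site this is intended for;
arch_flush_dcache_folio() and __flush_dcache_folio() are hypothetical
arch-internal helpers, not code from this series:

/* Sketch only: how arch cache-flush code might consult the mapping. */
static void arch_flush_dcache_folio(struct folio *folio)
{
	struct address_space *mapping = folio_flush_mapping(folio);

	/* Anonymous and swapcache folios return NULL: flush unconditionally. */
	if (!mapping || mapping_mapped(mapping))
		__flush_dcache_folio(folio);
	/* else: the flush can be deferred; see the next patch's docs. */
}
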
From patchwork Mon Jul 10 20:43:06 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118107
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v5 05/38] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Mon, 10 Jul 2023 21:43:06 +0100
Message-Id: <20230710204339.3554919-6-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 24 +++++++++---------------
 include/linux/cacheflush.h          |  4 ++--
 mm/util.c                           |  2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index b645947954fb..889fc84ccd1b 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -273,7 +273,7 @@ maps this page at its virtual address.
 	If D-cache aliasing is not an issue, these two routines may
 	simply call memcpy/memset directly and do nothing more.
 
-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``
 
 	This routines must be called when:
 
@@ -281,7 +281,7 @@ maps this page at its virtual address.
 	     and / or in high memory
 	  b) the kernel is about to read from a page cache page and user space
 	     shared/writable mappings of this page potentially exist.  Note
-	     that {get,pin}_user_pages{_fast} already call flush_dcache_page
+	     that {get,pin}_user_pages{_fast} already call flush_dcache_folio
 	     on any page found in the user address space and thus driver
 	     code rarely needs to take this into account.
 
@@ -295,7 +295,7 @@ maps this page at its virtual address.
 
 	The phrase "kernel writes to a page cache page" means, specifically,
 	that the kernel executes store instructions that dirty data in that
-	page at the page->virtual mapping of that page.  It is important to
+	page at the kernel virtual mapping of that page.  It is important to
 	flush here to handle D-cache aliasing, to make sure these kernel
 	stores are visible to user space mappings of that page.
 
@@ -306,18 +306,18 @@ maps this page at its virtual address.
 	If D-cache aliasing is not an issue, this routine may simply
 	be defined as a nop on that architecture.
 
-	There is a bit set aside in page->flags (PG_arch_1) as "architecture
+	There is a bit set aside in folio->flags (PG_arch_1) as "architecture
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
 	This allows these interfaces to be implemented much more
 	efficiently.  It allows one to "defer" (perhaps indefinitely) the
 	actual flush if there are currently no user processes mapping this
-	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
 	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if
-	page_file_mapping() returns a mapping, and mapping_mapped on that
+	The idea is, first at flush_dcache_folio() time, if
+	folio_flush_mapping() returns a mapping, and mapping_mapped() on that
 	mapping returns %false, just mark the architecture private page
 	flag bit.  Later, in update_mmu_cache_range(), a check is made
 	of this flag bit, and if set the flush is done and the flag bit
@@ -331,12 +331,6 @@ maps this page at its virtual address.
 	dirty.  Again, see sparc64 for examples of how to deal
 	with this.
 
-  ``void flush_dcache_folio(struct folio *folio)``
-	This function is called under the same circumstances as
-	flush_dcache_page().  It allows the architecture to
-	optimise for flushing the entire folio of pages instead
-	of flushing one page at a time.
-
   ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
   ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
@@ -357,7 +351,7 @@ maps this page at its virtual address.
 
 	When the kernel needs to access the contents of an anonymous
 	page, it calls this function (currently only
-	get_user_pages()).  Note: flush_dcache_page() deliberately
+	get_user_pages()).  Note: flush_dcache_folio() deliberately
 	doesn't work for an anonymous page.  The default
 	implementation is a nop (and should remain so for all coherent
 	architectures).  For incoherent architectures, it should flush
@@ -374,7 +368,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
+	flush_dcache_folio and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
 
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@
 struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c
index 5e9305189c3f..cde229b05eb3 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1119,7 +1119,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);
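A hedged sketch of the full deferral pattern the documentation hunk
describes, loosely modeled on the sparc64 approach; __flush_dcache_folio()
and the folio lookup are illustrative, not code from this series, and the
usual presence checks are omitted for brevity:

/* Sketch only: mark-now, flush-later using the PG_arch_1 private bit. */
void flush_dcache_folio(struct folio *folio)
{
	struct address_space *mapping = folio_flush_mapping(folio);

	if (mapping && !mapping_mapped(mapping)) {
		set_bit(PG_arch_1, &folio->flags);	/* defer the flush */
		return;
	}
	__flush_dcache_folio(folio);			/* flush immediately */
}

void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
		unsigned long address, pte_t *ptep, unsigned int nr)
{
	struct folio *folio = page_folio(pte_page(ptep_get(ptep)));

	/* Complete any flush deferred by flush_dcache_folio() above. */
	if (test_and_clear_bit(PG_arch_1, &folio->flags))
		__flush_dcache_folio(folio);
}
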
From patchwork Mon Jul 10 20:43:07 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118115
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport
Subject: [PATCH v5 06/38] mm: Add default definition of set_ptes()
Date: Mon, 10 Jul 2023 21:43:07 +0100
Message-Id: <20230710204339.3554919-7-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>
Most architectures can just define set_pte() and PFN_PTE_SHIFT to use
this definition.  It's also a handy spot to document the guarantees
provided by the MM.

Suggested-by: Mike Rapoport (IBM)
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport (IBM)
---
 include/linux/pgtable.h | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5063b482e34f..22f48f9997d5 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -180,6 +180,43 @@ static inline int pmd_young(pmd_t pmd)
 }
 #endif
 
+#ifndef set_ptes
+#ifdef PFN_PTE_SHIFT
+/**
+ * set_ptes - Map consecutive pages to a contiguous range of addresses.
+ * @mm: Address space to map the pages into.
+ * @addr: Address to map the first page at.
+ * @ptep: Page table pointer for the first entry.
+ * @pte: Page table entry for the first page.
+ * @nr: Number of pages to map.
+ *
+ * May be overridden by the architecture, or the architecture can define
+ * set_pte() and PFN_PTE_SHIFT.
+ *
+ * Context: The caller holds the page table lock.  The pages all belong
+ * to the same folio.  The PTEs are all in the same PMD.
+ */
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
+
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+	}
+}
+#ifndef set_pte_at
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#endif
+#endif
+#else
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma,
 				 unsigned long address, pte_t *ptep,
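A minimal userspace model (plain C, not kernel code) of the PTE stepping
in the loop above: each successive entry advances the PFN field by one,
so the raw PTE value grows by 1 << PFN_PTE_SHIFT per page.  The shift
value of 32 below mirrors alpha in the next patch; other architectures
use different values.

/* stepping.c - standalone demonstration, not kernel code */
#include <stdio.h>
#include <stdint.h>

#define PFN_PTE_SHIFT 32	/* alpha's value; arch-specific in reality */

int main(void)
{
	/* pfn 0x1234 with some arbitrary low-bit protection flags */
	uint64_t pte = (0x1234ULL << PFN_PTE_SHIFT) | 0x111;

	for (unsigned int nr = 4; nr; nr--) {
		printf("pfn 0x%llx flags 0x%llx\n",
		       (unsigned long long)(pte >> PFN_PTE_SHIFT),
		       (unsigned long long)(pte & ((1ULL << PFN_PTE_SHIFT) - 1)));
		pte += 1ULL << PFN_PTE_SHIFT;	/* same stepping as set_ptes() */
	}
	return 0;
}

Note the protection bits are carried along unchanged; only the PFN field
advances, which is why the contract requires all pages to belong to the
same folio.
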
From patchwork Mon Jul 10 20:43:08 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118080
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v5 07/38] alpha: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:08 +0100
Message-Id: <20230710204339.3554919-8-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_icache_pages().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    | 10 ++++++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include <asm-generic/cacheflush.h>
 
 #endif /* _ALPHA_CACHEFLUSH_H */
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..747b5f706c47 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,6 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -189,7 +188,8 @@ extern unsigned long __zero_page(void);
  * and a page entry and page directory to the page they refer to.
  */
 #define page_to_pa(page)	(page_to_pfn(page) << PAGE_SHIFT)
-#define pte_pfn(pte)	(pte_val(pte) >> 32)
+#define PFN_PTE_SHIFT		32
+#define pte_pfn(pte)	(pte_val(pte) >> PFN_PTE_SHIFT)
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, pgprot)						\
@@ -303,6 +303,12 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }
 
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs.  Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().
From patchwork Mon Jul 10 20:43:09 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118081
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Mike Rapoport, Vineet Gupta,
    linux-snps-arc@lists.infradead.org
Subject: [PATCH v5 08/38] arc: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:09 +0100
Message-Id: <20230710204339.3554919-9-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which means
it cannot always be set as we don't know that all pages in this folio were
cleaned). Enhance the internal flush routines to take the number of pages
to flush.
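The heart of the conversion is in update_mmu_cache_range() in the tlb.c
hunk below: once any page of the folio is found dirty, the flush is widened
from a single page to the whole folio. Condensed from that hunk (not a
standalone function):

	nr = folio_nr_pages(folio);
	offset = offset_in_folio(folio, paddr);
	paddr -= offset;	/* round down to the folio start */
	vaddr -= offset;
	__flush_dcache_pages(paddr, paddr, nr);		/* wback + inv, K-mapping */
	if (vma->vm_flags & VM_EXEC)
		__inv_icache_pages(paddr, vaddr, nr);	/* U-mapping */

This is also why PG_dc_clean may only be set once the whole folio is known
to be clean: the bit now summarises every page in the folio.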
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Vineet Gupta Cc: linux-snps-arc@lists.infradead.org --- arch/arc/include/asm/cacheflush.h | 7 ++- arch/arc/include/asm/pgtable-bits-arcv2.h | 12 ++--- arch/arc/include/asm/pgtable-levels.h | 1 + arch/arc/mm/cache.c | 61 ++++++++++++++--------- arch/arc/mm/tlb.c | 18 ++++--- 5 files changed, 59 insertions(+), 40 deletions(-) diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h index e201b4b1655a..04f65f588510 100644 --- a/arch/arc/include/asm/cacheflush.h +++ b/arch/arc/include/asm/cacheflush.h @@ -25,17 +25,20 @@ * in update_mmu_cache() */ #define flush_icache_page(vma, page) +#define flush_icache_pages(vma, page, nr) void flush_cache_all(void); void flush_icache_range(unsigned long kstart, unsigned long kend); void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len); -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr); -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr); +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio void dma_cache_wback_inv(phys_addr_t start, unsigned long sz); void dma_cache_inv(phys_addr_t start, unsigned long sz); diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index 6e9f8ca6d6a1..ee78ab30958d 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -100,14 +100,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) -{ - set_pte(ptep, pteval); -} +struct vm_fault; +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *ptep); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index ef68758b69f7..fc417c75c24d 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -169,6 +169,7 @@ #define pte_ERROR(e) \ pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pte_none(x) (!pte_val(x)) #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) #define pte_clear(mm,addr,ptep) set_pte_at(mm, addr, ptep, __pte(0)) diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c index 55c6de138eae..3c16ee942a5c 100644 --- a/arch/arc/mm/cache.c +++ b/arch/arc/mm/cache.c @@ -752,17 +752,17 @@ static inline void arc_slc_enable(void) * There's a corollary case, where kernel READs from a userspace mapped page. * If the U-mapping is not congruent to K-mapping, former needs flushing. 
*/ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; if (!cache_is_vipt_aliasing()) { - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); return; } /* don't handle anon pages here */ - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!mapping) return; @@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page) * Make a note that K-mapping is dirty */ if (!mapping_mapped(mapping)) { - clear_bit(PG_dc_clean, &page->flags); - } else if (page_mapcount(page)) { - + clear_bit(PG_dc_clean, &folio->flags); + } else if (folio_mapped(folio)) { /* kernel reading from page with U-mapping */ - phys_addr_t paddr = (unsigned long)page_address(page); - unsigned long vaddr = page->index << PAGE_SHIFT; + phys_addr_t paddr = (unsigned long)folio_address(folio); + unsigned long vaddr = folio_pos(folio); + /* + * vaddr is not actually the virtual address, but is + * congruent to every user mapping. + */ if (addr_not_cache_congruent(paddr, vaddr)) - __flush_dcache_page(paddr, vaddr); + __flush_dcache_pages(paddr, vaddr, + folio_nr_pages(folio)); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + return flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); /* @@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len) } /* wrapper to compile time eliminate alignment checks in flush loop */ -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr) +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE); + __ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE); } /* * wrapper to clearout kernel or userspace mappings of a page * For kernel mappings @vaddr == @paddr */ -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr) +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV); + __dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV); } noinline void flush_cache_all(void) @@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr, u_vaddr &= PAGE_MASK; - __flush_dcache_page(paddr, u_vaddr); + __flush_dcache_pages(paddr, u_vaddr, 1); if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, u_vaddr); + __inv_icache_pages(paddr, u_vaddr, 1); } void flush_cache_range(struct vm_area_struct *vma, unsigned long start, @@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long u_vaddr) { /* TBD: do we really need to clear the kernel mapping */ - __flush_dcache_page((phys_addr_t)page_address(page), u_vaddr); - __flush_dcache_page((phys_addr_t)page_address(page), - (phys_addr_t)page_address(page)); + __flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1); + __flush_dcache_pages((phys_addr_t)page_address(page), + (phys_addr_t)page_address(page), 1); } @@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); + struct folio *dst = page_folio(to); void *kfrom = kmap_atomic(from); void *kto = kmap_atomic(to); int clean_src_k_mappings = 0; @@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from, * 
addr_not_cache_congruent() is 0 */ if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) { - __flush_dcache_page((unsigned long)kfrom, u_vaddr); + __flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1); clean_src_k_mappings = 1; } @@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from, * non copied user pages (e.g. read faults which wire in pagecache page * directly). */ - clear_bit(PG_dc_clean, &to->flags); + clear_bit(PG_dc_clean, &dst->flags); /* * if SRC was already usermapped and non-congruent to kernel mapping * sync the kernel mapping back to physical page */ if (clean_src_k_mappings) { - __flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom); - set_bit(PG_dc_clean, &from->flags); + __flush_dcache_pages((unsigned long)kfrom, + (unsigned long)kfrom, 1); } else { - clear_bit(PG_dc_clean, &from->flags); + clear_bit(PG_dc_clean, &src->flags); } kunmap_atomic(kto); @@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from, void clear_user_page(void *to, unsigned long u_vaddr, struct page *page) { + struct folio *folio = page_folio(page); clear_page(to); - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); } EXPORT_SYMBOL(clear_user_page); diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 5f71445f26bd..6f40f37e6550 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) * Note that flush (when done) involves both WBACK - so physical page is * in sync as well as INV - so any non-congruent aliases don't remain */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, - pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr) { unsigned long vaddr = vaddr_unaligned & PAGE_MASK; phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; @@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, */ if ((vma->vm_flags & VM_EXEC) || addr_not_cache_congruent(paddr, vaddr)) { - - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(page); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); if (dirty) { + unsigned long offset = offset_in_folio(folio, paddr); + nr = folio_nr_pages(folio); + paddr -= offset; + vaddr -= offset; /* wback + inv dcache lines (K-mapping) */ - __flush_dcache_page(paddr, paddr); + __flush_dcache_pages(paddr, paddr, nr); /* invalidate any existing icache lines (U-mapping) */ if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, vaddr); + __inv_icache_pages(paddr, vaddr, nr); } } } @@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { pte_t pte = __pte(pmd_val(*pmd)); - update_mmu_cache(vma, addr, &pte); + update_mmu_cache_range(NULL, vma, addr, &pte, HPAGE_PMD_NR); } void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
From patchwork Mon Jul 10 20:43:10 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118116
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Mike Rapoport, Russell King,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 09/38] arm: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:10 +0100
Message-Id: <20230710204339.3554919-10-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages(). Change the PG_dcache_clean flag from being per-page
to per-folio, which makes __dma_page_dev_to_cpu() a bit more exciting.
Also add flush_cache_pages(), even though this isn't used by generic
code (yet?).
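What per-folio PG_dcache_clean buys is a single deferred-flush state for
the whole folio. The protocol, heavily condensed from the flush.c changes
below (a sketch, not the verbatim code):

	/* producer: the kernel dirtied a pagecache folio */
	void flush_dcache_folio(struct folio *folio)
	{
		if (!folio_mapped(folio))	/* no user mappings: defer */
			clear_bit(PG_dcache_clean, &folio->flags);
		else
			__flush_dcache_folio(folio_flush_mapping(folio), folio);
	}

	/* consumer: the first mapping into userspace pays the debt */
	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
		__flush_dcache_folio(mapping, folio);

The real code also has to handle broadcast-less SMP and VIVT aliases, and
__dma_page_dev_to_cpu() below now walks variable-sized folios rather than
fixed PAGE_SIZE steps, which is the "more exciting" part mentioned above.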
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Reviewed-by: Russell King (Oracle) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm/include/asm/cacheflush.h | 24 +++++--- arch/arm/include/asm/pgtable.h | 5 +- arch/arm/include/asm/tlbflush.h | 14 +++-- arch/arm/mm/copypage-v4mc.c | 5 +- arch/arm/mm/copypage-v6.c | 5 +- arch/arm/mm/copypage-xscale.c | 5 +- arch/arm/mm/dma-mapping.c | 24 ++++---- arch/arm/mm/fault-armv.c | 16 ++--- arch/arm/mm/flush.c | 99 +++++++++++++++++++------------ arch/arm/mm/mm.h | 2 +- arch/arm/mm/mmu.c | 14 +++-- arch/arm/mm/nommu.c | 6 ++ 12 files changed, 133 insertions(+), 86 deletions(-) diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index a094f964c869..841e268d2374 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned vma->vm_flags); } -static inline void -vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +static inline void vivt_flush_cache_pages(struct vm_area_struct *vma, + unsigned long user_addr, unsigned long pfn, unsigned int nr) { struct mm_struct *mm = vma->vm_mm; if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) { unsigned long addr = user_addr & PAGE_MASK; - __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); + __cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE, + vma->vm_flags); } } @@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig vivt_flush_cache_mm(mm) #define flush_cache_range(vma,start,end) \ vivt_flush_cache_range(vma,start,end) -#define flush_cache_page(vma,addr,pfn) \ - vivt_flush_cache_page(vma,addr,pfn) +#define flush_cache_pages(vma, addr, pfn, nr) \ + vivt_flush_cache_pages(vma, addr, pfn, nr) #else -extern void flush_cache_mm(struct mm_struct *mm); -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); -extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); +void flush_cache_mm(struct
mm_struct *mm); +void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). */ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index 34662a9d4cab..ba573f22d7cc 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_ptes set_ptes static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..38c6e4a2a0b6 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,22 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. 
*/ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, + unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 033a1bce2b17..70cb7e63a9a5 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -695,6 +695,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -709,19 +710,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. 
*/ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index ca5302b0b7ee..36f26f38cb30 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -181,12 +181,12 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; - struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) return; @@ -195,13 +195,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 2508be91b7a0..d19d140a10c7 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. 
*/ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. */ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + unsigned long start, offset, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. 
*/ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where * the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. 
The expected sequence is: diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index d7ffccb7fea7..419316316711 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -45,7 +45,7 @@ struct mem_type { const struct mem_type *get_mem_type(unsigned int type); -extern void __flush_dcache_page(struct address_space *mapping, struct page *page); +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio); /* * ARM specific vm_struct->flags bits. diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 13fc4bb5f792..c9981c23e8e9 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc) bootmem_init(); empty_zero_page = virt_to_page(zero_page); - __flush_dcache_page(NULL, empty_zero_page); + __flush_dcache_folio(NULL, page_folio(empty_zero_page)); } void __init early_mm_init(const struct machine_desc *mdesc) @@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc) early_paging_init(mdesc); } -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { unsigned long ext = 0; @@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, ext |= PTE_EXT_NG; } - set_pte_ext(ptep, pteval, ext); + for (;;) { + set_pte_ext(ptep, pteval, ext); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += PAGE_SIZE; + } } diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 43cfd06bbeba..c415f3859b20 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -180,6 +180,12 @@ void setup_mm_for_reboot(void) { } +void flush_dcache_folio(struct folio *folio) +{ + __cpuc_flush_dcache_area(folio_address(folio), folio_size(folio)); +} +EXPORT_SYMBOL(flush_dcache_folio); + void flush_dcache_page(struct page *page) { __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
From patchwork Mon Jul 10 20:43:11 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118105
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Catalin Marinas, Mike Rapoport,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 10/38] arm64: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:11 +0100
Message-Id: <20230710204339.3554919-11-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range()
and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Catalin Marinas Acked-by: Mike Rapoport (IBM) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/cacheflush.h | 4 +++- arch/arm64/include/asm/pgtable.h | 26 +++++++++++++++------ arch/arm64/mm/flush.c | 36 +++++++++++------------------ 3 files changed, 36 insertions(+), 30 deletions(-) diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 37185e978aeb..d115451ed263 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, #define copy_to_user_page copy_to_user_page /* - * flush_dcache_page is used when the kernel has written to the page + * flush_dcache_folio is used when the kernel has written to the page * cache page at virtual address page->virtual. * * If this page isn't mapped (ie, page_mapping == NULL), or it might @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio static __always_inline void icache_inval_all_pou(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index a44a150e0318..c1c4abf75217 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -345,12 +345,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - return __set_pte_at(mm, addr, ptep, pte); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pte); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } } +#define set_ptes set_ptes /* * Huge pte definitions. @@ -1049,8 +1058,9 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, + unsigned int nr) { /* * We don't do anything here, so there's a very small chance of @@ -1059,6 +1069,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #ifdef CONFIG_ARM64_PA_BITS_52 diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index 4e6476094952..013eead9b695 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -51,20 +51,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, void __sync_icache_dcache(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { - sync_icache_aliases((unsigned long)page_address(page), - (unsigned long)page_address(page) + - page_size(page)); - set_bit(PG_dcache_clean, &page->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + sync_icache_aliases((unsigned long)folio_address(folio), + (unsigned long)folio_address(folio) + + folio_size(folio)); + set_bit(PG_dcache_clean, &folio->flags); } } EXPORT_SYMBOL_GPL(__sync_icache_dcache); @@ -74,17 +67,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache); * it as dirty for later flushing when mapped in user space (if executable, * see __sync_icache_dcache). */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in __sync_icache_dcache()). - */ - if (PageHuge(page)) - page = compound_head(page); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +EXPORT_SYMBOL(flush_dcache_folio); - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } EXPORT_SYMBOL(flush_dcache_page);
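From a caller's point of view, the arm64 set_ptes() above means one call
can map a whole folio. A hypothetical caller, for illustration only
(map_folio() and its arguments are invented here, not part of the patch):

	static void map_folio(struct vm_area_struct *vma, unsigned long addr,
			      pte_t *ptep, struct folio *folio, pgprot_t prot)
	{
		unsigned int nr = folio_nr_pages(folio);
		pte_t pte = mk_pte(folio_page(folio, 0), prot);

		/* one set_ptes() loop instead of nr set_pte_at() calls */
		set_ptes(vma->vm_mm, addr, ptep, pte, nr);
		update_mmu_cache_range(NULL, vma, addr, ptep, nr);
	}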
From patchwork Mon Jul 10 20:43:12 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118097
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Guo Ren, Mike Rapoport,
    linux-csky@vger.kernel.org
Subject: [PATCH v5 11/38] csky: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:12 +0100
Message-Id: <20230710204339.3554919-12-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
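As on alpha and arc, defining PFN_PTE_SHIFT (PAGE_SHIFT here, i.e. the PTE
holds the physical address of the page) is what allows the architecture's
own set_pte_at() to be deleted in the diff below; the generic header then
supplies one in terms of set_ptes(), roughly:

	/* approximate shape of the generic fallback, for illustration */
	#define set_pte_at(mm, addr, ptep, pte)	set_ptes(mm, addr, ptep, pte, 1)

Note also that kmap_local_folio() takes a byte offset into the folio, which
is why the abiv2 loop below passes i * PAGE_SIZE rather than a page index.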
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Acked-by: Mike Rapoport (IBM)
Cc: linux-csky@vger.kernel.org
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 32 ++++++++++++++--------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      |  8 ++++---
 5 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index 94fbc03cbe70..171e8fb32285 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -15,45 +15,51 @@
 
 #define PG_dcache_clean		PG_arch_1
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;
 
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 
-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else {
 		dcache_wbinv_all();
 		if (mapping)
 			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
 
 	flush_tlb_page(vma, addr);
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;
 
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
 		dcache_wbinv_all();
 
-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
 		if (vma->vm_flags & VM_EXEC)
 			icache_inv_all();
 	}
 }
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio
 
 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 9923cd24db58..d05a551af5d5 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -7,32 +7,32 @@
 #include
 #include
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;
 
 	flush_tlb_page(vma, address);
 
-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));
 
-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;
 
-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+								i * PAGE_SIZE);
 
-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }
 
 void flush_icache_deferred(struct mm_struct *mm)
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@
 
 #define PG_dcache_clean		PG_arch_1
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..42405037c871 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -28,6 +28,7 @@
 #define pgd_ERROR(e) \
 	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pmd_pfn(pmd)	(pmd_phys(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd)	(pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
 #define pte_clear(mm, addr, ptep)	set_pte((ptep), \
@@ -90,7 +91,6 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +263,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte);
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
	remap_pfn_range(vma, vaddr, pfn, size, prot)
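
The csky conversion above has the shape every per-arch patch in this
series follows: the old one-page entry points become wrappers around new
range interfaces, and the generic set_ptes() fills nr consecutive PTEs by
stepping the PFN. As a stand-alone illustration only (plain integers stand
in for pte_t, set_ptes_model() is not a kernel function, and PFN_PTE_SHIFT
is assumed to equal PAGE_SHIFT, which is true on csky but not on every
architecture in this series), the core loop looks like this:

/*
 * Illustration only -- not kernel code.  A user-space model of the
 * generic set_ptes() contract.
 */
#include <stdint.h>
#include <stdio.h>

#define PFN_PTE_SHIFT 12	/* stand-in for PAGE_SHIFT */

static void set_ptes_model(uint64_t *ptep, uint64_t pte, unsigned int nr)
{
	for (;;) {
		*ptep = pte;			/* set_pte(ptep, pte) */
		if (--nr == 0)
			break;
		ptep++;
		/* next page of the folio: add one to the PFN field */
		pte += 1ULL << PFN_PTE_SHIFT;
	}
}

int main(void)
{
	uint64_t ptes[4] = { 0 };
	unsigned int i;

	/* map a 4-page folio starting at PFN 0x100; low bits model prot */
	set_ptes_model(ptes, (0x100ULL << PFN_PTE_SHIFT) | 0x3, 4);
	for (i = 0; i < 4; i++)
		printf("pte[%u] = %#llx\n", i, (unsigned long long)ptes[i]);
	return 0;
}

The last line of the loop is why each architecture must now export
PFN_PTE_SHIFT: stepping to the next page means adding one to the PFN
field, wherever that field happens to sit inside the PTE.
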
From patchwork Mon Jul 10 20:43:13 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118082
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Brian Cain, Mike Rapoport
Subject: [PATCH v5 12/38] hexagon: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:13 +0100
Message-Id: <20230710204339.3554919-13-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
Acked-by: Mike Rapoport (IBM)
---
 arch/hexagon/include/asm/cacheflush.h | 8 ++++++--
 arch/hexagon/include/asm/pgtable.h    | 9 +--------
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..dc3f500a5a01 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,16 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	/*  generic_ptrace_pokedata doesn't wind up here, does it?  */
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..dd05dd71b8ec 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -338,6 +338,7 @@ static inline int pte_exec(pte_t pte)
 /* __swp_entry_to_pte - extract PTE from swap entry */
 #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 /* pfn_pte - convert page number and protection value to page table entry */
 #define pfn_pte(pfn, pgprot) __pte((pfn << PAGE_SHIFT) | pgprot_val(pgprot))
 
@@ -345,14 +346,6 @@ static inline int pte_exec(pte_t pte)
 #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
 
-/*
- * set_pte_at - update page table and do whatever magic may be
- * necessary to make the underlying hardware/firmware take note.
- *
- * VM may require a virtual instruction to alert the MMU.
- */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
-
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);
From patchwork Mon Jul 10 20:43:14 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118083
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, linux-ia64@vger.kernel.org
Subject: [PATCH v5 13/38] ia64: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:14 +0100
Message-Id: <20230710204339.3554919-14-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
more exciting.
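
As an illustrative aside (not kernel code), the folio walk that
mark_clean() and arch_dma_mark_clean() now share can be modelled in user
space, with sizes[] standing in for folio_size() on successive folios and
offset for offset_in_folio(); only folios lying wholly inside
[addr, addr + size) get marked:

/* Illustration only -- folios are "marked" by printing instead of
 * setting PG_arch_1. */
#include <stddef.h>
#include <stdio.h>

static void mark_clean_model(size_t offset, size_t size,
			     const size_t *sizes, int nfolios)
{
	ptrdiff_t left = size;
	int i = 0;

	if (offset) {			/* skip the partial leading folio */
		left -= sizes[i] - offset;
		i++;
	}
	while (i < nfolios && left >= (ptrdiff_t)sizes[i]) {
		printf("mark folio %d clean (%zu bytes)\n", i, sizes[i]);
		left -= sizes[i];
		i++;
	}	/* a partial trailing folio is skipped as well */
}

int main(void)
{
	size_t sizes[] = { 16384, 4096, 65536 };

	/* the range starts 8KiB into folio 0 and is 20KiB long, so only
	 * folio 1 lies wholly inside it */
	mark_clean_model(8192, 20480, sizes, 3);
	return 0;
}
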
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++-----------
 arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
 arch/ia64/include/asm/pgtable.h    |  4 ++--
 arch/ia64/mm/init.c                | 28 +++++++++++++++++++---------
 4 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 8ad6946521d8..48d475f10003 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
 #endif
 
 #ifdef ENABLE_MARK_CLEAN
-/**
+/*
  * Since DMA is i-cache coherent, any (complete) pages that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
-static void
-mark_clean (void *addr, size_t size)
+static void mark_clean(void *addr, size_t size)
 {
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page((void *)pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
+	struct folio *folio = virt_to_folio(addr);
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, addr);
+
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
 	}
 }
 #endif
diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
index 708c0fa5d975..eac493fa9e0d 100644
--- a/arch/ia64/include/asm/cacheflush.h
+++ b/arch/ia64/include/asm/cacheflush.h
@@ -13,10 +13,16 @@
 #include
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			\
-do {						\
-	clear_bit(PG_arch_1, &(page)->flags);	\
-} while (0)
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_arch_1, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_range flush_icache_range
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 21c97e31a28a..4e5dd800ce1f 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -206,6 +206,7 @@ ia64_phys_addr_valid (unsigned long addr)
 #define RGN_MAP_SHIFT (PGDIR_SHIFT + PTRS_PER_PGD_SHIFT - 3)
 #define RGN_MAP_LIMIT	((1UL << RGN_MAP_SHIFT) - PAGE_SIZE)	/* per region addr limit */
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 /*
  * Conversion functions: convert page frame number (pfn) and a protection value to a page
  * table entry (pte).
@@ -303,8 +304,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 /*
  * Make page protection values cacheable, uncacheable, or write-
  * combining.  Note that "protection" is really a misnomer here as the
@@ -396,6 +395,7 @@ pte_same (pte_t a, pte_t b)
 	return pte_val(a) == pte_val(b);
 }
 
+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do { } while (0)
 #define update_mmu_cache(vma, address, ptep) do { } while (0)
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 7f5353e28516..b95debabdc2a 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -50,30 +50,40 @@ void
 __ia64_sync_icache_dcache (pte_t pte)
 {
 	unsigned long addr;
-	struct page *page;
+	struct folio *folio;
 
-	page = pte_page(pte);
-	addr = (unsigned long) page_address(page);
+	folio = page_folio(pte_page(pte));
+	addr = (unsigned long)folio_address(folio);
 
-	if (test_bit(PG_arch_1, &page->flags))
+	if (test_bit(PG_arch_1, &folio->flags))
 		return;				/* i-cache is already coherent with d-cache */
 
-	flush_icache_range(addr, addr + page_size(page));
-	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
+	flush_icache_range(addr, addr + folio_size(folio));
+	set_bit(PG_arch_1, &folio->flags);	/* mark page as clean */
 }
 
 /*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * Since DMA is i-cache coherent, any (complete) folios that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
 void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
 {
 	unsigned long pfn = PHYS_PFN(paddr);
+	struct folio *folio = page_folio(pfn_to_page(pfn));
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, paddr);
 
-	do {
-		set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
-	} while (++pfn <= PHYS_PFN(paddr + size - 1));
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
+	}
 }
 
 inline void
From patchwork Mon Jul 10 20:43:15 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118108
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Huacai Chen, WANG Xuerui, loongarch@lists.linux.dev
Subject: [PATCH v5 14/38] loongarch: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:15 +0100
Message-Id: <20230710204339.3554919-15-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add update_mmu_cache_range() and change _PFN_SHIFT to PFN_PTE_SHIFT.
It would probably be more efficient to implement __update_tlb() by
flushing the entire folio instead of calling __update_tlb() N times,
but I'll leave that for someone who understands the architecture
better.
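
As an illustrative aside (not kernel code), the loop the patch adds below
has this shape; update_tlb_model() is a placeholder for loongarch's
__update_tlb(), which remains a per-page operation:

/* Illustration only -- a user-space model of the range loop. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static void update_tlb_model(unsigned long address)
{
	printf("update TLB entry for %#lx\n", address);
}

static void update_mmu_cache_range_model(unsigned long address,
					 unsigned int nr)
{
	for (;;) {
		update_tlb_model(address);
		if (--nr == 0)
			break;
		address += PAGE_SIZE;
	}
}

int main(void)
{
	/* a 4-page mapping at a hypothetical user address */
	update_mmu_cache_range_model(0x400000UL, 4);
	return 0;
}
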
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
---
 arch/loongarch/include/asm/cacheflush.h   |  1 +
 arch/loongarch/include/asm/pgtable-bits.h |  4 +--
 arch/loongarch/include/asm/pgtable.h      | 33 ++++++++++++-----------
 arch/loongarch/mm/pgtable.c               |  2 +-
 arch/loongarch/mm/tlb.c                   |  2 +-
 5 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..88a44da50a3b 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,6 +47,7 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page, nr)		do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
index de46a6b1e9f1..35348d4c4209 100644
--- a/arch/loongarch/include/asm/pgtable-bits.h
+++ b/arch/loongarch/include/asm/pgtable-bits.h
@@ -50,12 +50,12 @@
 #define _PAGE_NO_EXEC		(_ULCAST_(1) << _PAGE_NO_EXEC_SHIFT)
 #define _PAGE_RPLV		(_ULCAST_(1) << _PAGE_RPLV_SHIFT)
 #define _CACHE_MASK		(_ULCAST_(3) << _CACHE_SHIFT)
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)
+#define PFN_PTE_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)
 
 #define _PAGE_USER	(PLV_USER << _PAGE_PLV_SHIFT)
 #define _PAGE_KERN	(PLV_KERN << _PAGE_PLV_SHIFT)
 
-#define _PFN_MASK (~((_ULCAST_(1) << (_PFN_SHIFT)) - 1) & \
+#define _PFN_MASK (~((_ULCAST_(1) << (PFN_PTE_SHIFT)) - 1) & \
 		  ((_ULCAST_(1) << (_PAGE_PFN_END_SHIFT)) - 1))
 
 /*
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 38afeb7dd58b..e7cf25e452c0 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -237,9 +237,9 @@ extern pmd_t mk_pmd(struct page *page, pgprot_t prot);
 extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd);
 
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
-#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 
 /*
  * Initialize a new pgd / pud / pmd table with invalid pointers.
@@ -334,19 +334,13 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	}
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
-}
-
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	/* Preserve global status for the pair */
 	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
-		set_pte_at(mm, addr, ptep, __pte(_PAGE_GLOBAL));
+		set_pte(ptep, __pte(_PAGE_GLOBAL));
 	else
-		set_pte_at(mm, addr, ptep, __pte(0));
+		set_pte(ptep, __pte(0));
 }
 
 #define PGD_T_LOG2	(__builtin_ffs(sizeof(pgd_t)) - 1)
@@ -445,11 +439,20 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern void __update_tlb(struct vm_area_struct *vma,
 			unsigned long address, pte_t *ptep);
 
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-			unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	for (;;) {
+		__update_tlb(vma, address, ptep);
+		if (--nr == 0)
+			break;
+		address += PAGE_SIZE;
+		ptep++;
+	}
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
@@ -462,7 +465,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT;
+	return (pmd_val(pmd) & _PFN_MASK) >> PFN_PTE_SHIFT;
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..1260cf30e3ee 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -107,7 +107,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 {
 	pmd_t pmd;
 
-	pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot);
+	pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot);
 
 	return pmd;
 }
diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 00bb563e3c89..eb8572e201ea 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -252,7 +252,7 @@ static void output_pgtable_bits_defines(void)
 	pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT);
 	pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
 	pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT);
-	pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT);
+	pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT);
 	pr_debug("\n");
 }
From patchwork Mon Jul 10 20:43:16 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118093
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Geert Uytterhoeven, Mike Rapoport, linux-m68k@lists.linux-m68k.org
Subject: [PATCH v5 15/38] m68k: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:16 +0100
Message-Id: <20230710204339.3554919-16-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().
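
As an illustrative aside (not kernel code), the ColdFire branch of
__flush_pages_to_ram() below keeps the existing wrap handling: cache-set
indices are the address masked by ICACHE_SET_MASK, so an nr-page range
can wrap past the top of the index space and must then be flushed as two
pieces. A user-space model, with a hypothetical mask value and
flush_model() standing in for flush_cf_bcache():

/* Illustration only -- the mask value is not ColdFire's real one. */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define ICACHE_SET_MASK	0xffffUL	/* hypothetical index mask */
#define ICACHE_MAX_ADDR	ICACHE_SET_MASK

static void flush_model(unsigned long start, unsigned long end)
{
	printf("flush sets %#lx..%#lx\n", start, end);
}

static void flush_pages_model(unsigned long addr, unsigned int nr)
{
	unsigned long start, end;

	addr &= ~(PAGE_SIZE - 1);
	start = addr & ICACHE_SET_MASK;
	end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
	if (start > end) {		/* the range wrapped: split it */
		flush_model(0, end);
		end = ICACHE_MAX_ADDR;
	}
	flush_model(start, end);
}

int main(void)
{
	/* 4 pages starting at 0x2f000 wrap past the 64KiB index space */
	flush_pages_model(0x2f000UL, 4);
	return 0;
}
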
Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Geert Uytterhoeven
Acked-by: Mike Rapoport (IBM)
Cc: linux-m68k@lists.linux-m68k.org
---
 arch/m68k/include/asm/cacheflush_mm.h    | 27 ++++++++++++++++--------
 arch/m68k/include/asm/mcf_pgtable.h      |  1 +
 arch/m68k/include/asm/motorola_pgtable.h |  1 +
 arch/m68k/include/asm/pgtable_mm.h       | 10 +++++----
 arch/m68k/include/asm/sun3_pgtable.h     |  1 +
 arch/m68k/mm/motorola.c                  |  2 +-
 6 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 1ac55e7b47f0..88eb85e81ef6 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -220,24 +220,29 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm
 /* Push the page at kernel virtual address and clear the icache */
 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
-static inline void __flush_page_to_ram(void *vaddr)
+static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 {
 	if (CPU_IS_COLDFIRE) {
 		unsigned long addr, start, end;
 		addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1);
 		start = addr & ICACHE_SET_MASK;
-		end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK;
+		end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
 		if (start > end) {
 			flush_cf_bcache(0, end);
 			end = ICACHE_MAX_ADDR;
 		}
 		flush_cf_bcache(start, end);
 	} else if (CPU_IS_040_OR_060) {
-		__asm__ __volatile__("nop\n\t"
-				     ".chip 68040\n\t"
-				     "cpushp %%bc,(%0)\n\t"
-				     ".chip 68k"
-				     : : "a" (__pa(vaddr)));
+		unsigned long paddr = __pa(vaddr);
+
+		do {
+			__asm__ __volatile__("nop\n\t"
+					     ".chip 68040\n\t"
+					     "cpushp %%bc,(%0)\n\t"
+					     ".chip 68k"
+					     : : "a" (paddr));
+			paddr += PAGE_SIZE;
+		} while (--nr);
 	} else {
 		unsigned long _tmp;
 		__asm__ __volatile__("movec %%cacr,%0\n\t"
@@ -249,10 +254,14 @@ static inline void __flush_page_to_ram(void *vaddr)
 }
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)	__flush_page_to_ram(page_address(page))
+#define flush_dcache_page(page)	__flush_pages_to_ram(page_address(page), 1)
+#define flush_dcache_folio(folio)		\
+	__flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio))
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)	__flush_page_to_ram(page_address(page))
+#define flush_icache_pages(vma, page, nr)	\
+	__flush_pages_to_ram(page_address(page), nr)
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
 
 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				    unsigned long addr, int len);
diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h
index 43e8da8465f9..772b7e7b0654 100644
--- a/arch/m68k/include/asm/mcf_pgtable.h
+++ b/arch/m68k/include/asm/mcf_pgtable.h
@@ -291,6 +291,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 	return pte;
 }
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pmd_pfn(pmd)	(pmd_val(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd)	(pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT))
 
diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index ec0dc19ab834..38d5e5edc3e1 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -112,6 +112,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
 #define pte_present(pte)	(pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROTNONE))
 #define pte_clear(mm,addr,ptep)	({ pte_val(*(ptep)) = 0; })
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pte_page(pte)		virt_to_page(__va(pte_val(pte)))
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
 #define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index b93c41fe2067..dbdf1c2b2f66 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -31,8 +31,6 @@ do{							\
 	*(pteptr) = (pteval);				\
 } while(0)
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #if CONFIG_PGTABLE_LEVELS == 3
@@ -138,11 +136,15 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * tables contain all the necessary information.  The Sun3 does, but
  * they are updated on demand.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
+
 #endif /* !__ASSEMBLY__ */
 
 /* MMU-specific headers */
diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h
index 9e7bf8a5f8f8..0cc39a88ce55 100644
--- a/arch/m68k/include/asm/sun3_pgtable.h
+++ b/arch/m68k/include/asm/sun3_pgtable.h
@@ -105,6 +105,7 @@ static inline void pte_clear (struct mm_struct *mm, unsigned long addr, pte_t *p
 	pte_val (*ptep) = 0;
 }
 
+#define PFN_PTE_SHIFT	0
 #define pte_pfn(pte)            (pte_val(pte) & SUN3_PAGE_PGNUM_MASK)
 #define pfn_pte(pfn, pgprot) \
 ({ pte_t __pte; pte_val(__pte) = pfn | pgprot_val(pgprot); __pte; })
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index c75984e2d86b..8bca46e51e94 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr)
 
 void mmu_page_ctor(void *page)
 {
-	__flush_page_to_ram(page);
+	__flush_pages_to_ram(page, 1);
 	flush_tlb_kernel_page(page);
 	nocache_page(page);
 }
From patchwork Mon Jul 10 20:43:17 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118101
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Simek
Subject: [PATCH v5 16/38] microblaze: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:17 +0100
Message-Id: <20230710204339.3554919-17-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Rename PFN_SHIFT_OFFSET to PFN_PTE_SHIFT. Change the calling
convention for set_pte() to be the same as other architectures. Add
update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().
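
As an illustrative aside (not kernel code), the flush_dcache_folio()
added below needs no per-page loop: a folio is physically contiguous, so
one ranged flush covers it. A user-space model, with
flush_dcache_range_model() standing in for microblaze's
flush_dcache_range() and the folio modelled as a first PFN plus a byte
size:

/* Illustration only. */
#include <stdio.h>

#define PAGE_SHIFT 12

static void flush_dcache_range_model(unsigned long start, unsigned long end)
{
	printf("flush dcache %#lx..%#lx\n", start, end);
}

static void flush_dcache_folio_model(unsigned long first_pfn,
				     unsigned long folio_bytes)
{
	unsigned long addr = first_pfn << PAGE_SHIFT;

	/* one ranged flush covers the whole physically contiguous folio */
	flush_dcache_range_model(addr, addr + folio_bytes);
}

int main(void)
{
	flush_dcache_folio_model(0x100UL, 4UL << PAGE_SHIFT);	/* 4 pages */
	return 0;
}
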
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Michal Simek
---
 arch/microblaze/include/asm/cacheflush.h |  8 ++++++++
 arch/microblaze/include/asm/pgtable.h    | 15 ++++-----------
 arch/microblaze/include/asm/tlbflush.h   |  4 +++-
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index 39f8fb6768d8..e6641ff98cb3 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -74,6 +74,14 @@ do { \
 	flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \
 } while (0);
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	unsigned long addr = folio_pfn(folio) << PAGE_SHIFT;
+
+	flush_dcache_range(addr, addr + folio_size(folio));
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);
 
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index d1b8272abcd9..6f9b99082518 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -230,12 +230,12 @@ extern unsigned long empty_zero_page[1024];
 #define pte_page(x)		(mem_map + (unsigned long) \
 				((pte_val(x) - memory_start) >> PAGE_SHIFT))
 
-#define PFN_SHIFT_OFFSET	(PAGE_SHIFT)
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 
-#define pte_pfn(x)		(pte_val(x) >> PFN_SHIFT_OFFSET)
+#define pte_pfn(x)		(pte_val(x) >> PFN_PTE_SHIFT)
 
 #define pfn_pte(pfn, prot) \
-	__pte(((pte_basic_t)(pfn) << PFN_SHIFT_OFFSET) | pgprot_val(prot))
+	__pte(((pte_basic_t)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 
 #ifndef __ASSEMBLY__
 /*
@@ -330,14 +330,7 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-static inline void set_pte(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
-{
-	*ptep = pte;
-}
-
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
 }
diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index 2038168ed128..a31ae9d44083 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma,
 
 #define flush_tlb_kernel_range(start, end)	do { } while (0)
 
-#define update_mmu_cache(vma, addr, ptep)	do { } while (0)
+#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) do { } while (0)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
 #define flush_tlb_all local_flush_tlb_all
 #define flush_tlb_mm local_flush_tlb_mm
From patchwork Mon Jul 10 20:43:18 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118096
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v5 17/38] mips: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:18 +0100
Message-Id: <20230710204339.3554919-18-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Rename _PFN_SHIFT to PFN_PTE_SHIFT. Convert a few places to call
set_pte() instead of set_pte_at(). Add set_ptes(),
update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.
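
As an illustrative aside (not kernel code), the decision set_ptes() gains
below about when __update_cache() is needed can be modelled in user
space. Plain integers stand in for pte_t, bit 0 models _PAGE_PRESENT,
and, mirroring the patch, every existing entry is compared against the
incoming PTE of the first page:

/* Illustration only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PRESENT		0x1ULL
#define PFN_PTE_SHIFT	12

static bool need_cache_sync(const uint64_t *ptep, uint64_t pte,
			    unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		if (!(pte & PRESENT))
			continue;	/* nothing becomes present */
		if ((ptep[i] & PRESENT) &&
		    (ptep[i] >> PFN_PTE_SHIFT) == (pte >> PFN_PTE_SHIFT))
			continue;	/* same page was already mapped */
		return true;		/* __update_cache() is needed */
	}
	return false;
}

int main(void)
{
	uint64_t ptes[2] = { (0x100ULL << PFN_PTE_SHIFT) | PRESENT, 0 };

	/* slot 1 is empty, so installing a present range needs a sync */
	printf("sync needed: %d\n",
	       (int)need_cache_sync(ptes,
				    (0x100ULL << PFN_PTE_SHIFT) | PRESENT, 2));
	return 0;
}
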
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/bcm47xx/prom.c             |  2 +-
 arch/mips/include/asm/cacheflush.h   | 32 +++++++++-----
 arch/mips/include/asm/pgtable-32.h   | 10 ++---
 arch/mips/include/asm/pgtable-64.h   |  6 +--
 arch/mips/include/asm/pgtable-bits.h |  6 +--
 arch/mips/include/asm/pgtable.h      | 63 ++++++++++++++++++----------
 arch/mips/mm/c-r4k.c                 |  5 ++-
 arch/mips/mm/cache.c                 | 56 ++++++++++++-------------
 arch/mips/mm/init.c                  | 21 ++++++----
 arch/mips/mm/pgtable-32.c            |  2 +-
 arch/mips/mm/pgtable-64.c            |  2 +-
 arch/mips/mm/tlbex.c                 |  2 +-
 12 files changed, 121 insertions(+), 86 deletions(-)

diff --git a/arch/mips/bcm47xx/prom.c b/arch/mips/bcm47xx/prom.c
index a9bea411d928..99a1ba5394e0 100644
--- a/arch/mips/bcm47xx/prom.c
+++ b/arch/mips/bcm47xx/prom.c
@@ -116,7 +116,7 @@ void __init prom_init(void)
 #if defined(CONFIG_BCM47XX_BCMA) && defined(CONFIG_HIGHMEM)
 
 #define EXTVBASE	0xc0000000
-#define ENTRYLO(x)	((pte_val(pfn_pte((x) >> _PFN_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1)
+#define ENTRYLO(x)	((pte_val(pfn_pte((x) >> PFN_PTE_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1)
 
 #include
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index d8d3f80f9fc0..0f389bc7cb90 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1
 
-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)		\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)		\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)
 
 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+	struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index ba0016709a1a..0e196650f4f4 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -153,7 +153,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 #if defined(CONFIG_XPA)
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 40
-#define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
+#define pte_pfn(x)		(((unsigned long)((x).pte_high >> PFN_PTE_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
 static inline pte_t
 pfn_pte(unsigned long pfn, pgprot_t prot)
 {
@@ -161,7 +161,7 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 
 	pte.pte_low = (pfn >> _PAGE_PRESENT_SHIFT) |
 				(pgprot_val(prot) & ~_PFNX_MASK);
-	pte.pte_high = (pfn << _PFN_SHIFT) |
+	pte.pte_high = (pfn << PFN_PTE_SHIFT) |
 				(pgprot_val(prot) & ~_PFN_MASK);
 	return pte;
 }
@@ -184,9 +184,9 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
 #else
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 32
-#define pte_pfn(x)		((unsigned long)((x).pte >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)((x).pte >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
 
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 98e24e3e7f2b..20ca48c1b606 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -298,9 +298,9 @@ static inline void pud_clear(pud_t *pudp)
 
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
 
-#define pte_pfn(x)		((unsigned long)((x).pte >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)((x).pte >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 
 #ifndef __PAGETABLE_PMD_FOLDED
 static inline pmd_t *pud_pgtable(pud_t pud)
diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 1c576679aa87..421e78c30253 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -182,10 +182,10 @@ enum pgtable_bits {
 #if defined(CONFIG_CPU_R3K_TLB)
 # define _CACHE_UNCACHED	(1 << _CACHE_UNCACHED_SHIFT)
 # define _CACHE_MASK		_CACHE_UNCACHED
-# define _PFN_SHIFT		PAGE_SHIFT
+# define PFN_PTE_SHIFT		PAGE_SHIFT
 #else
 # define _CACHE_MASK		(7 << _CACHE_SHIFT)
-# define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+# define PFN_PTE_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
 #endif
 
 #ifndef _PAGE_NO_EXEC
@@ -195,7 +195,7 @@ enum pgtable_bits {
 #define _PAGE_SILENT_READ	_PAGE_VALID
 #define _PAGE_SILENT_WRITE	_PAGE_DIRTY
 
-#define _PFN_MASK		(~((1 << (_PFN_SHIFT)) - 1))
+#define _PFN_MASK		(~((1 << (PFN_PTE_SHIFT)) - 1))
 
 /*
  * The final layouts of the PTE bits are:
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 574fa14ac8b2..cbb93a834f52 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -66,7 +66,7 @@ extern void paging_init(void);
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return pmd_val(pmd) >> _PFN_SHIFT;
+	return pmd_val(pmd) >> PFN_PTE_SHIFT;
 }
 
 #ifndef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -105,9 +105,6 @@ do {									\
 	}								\
 } while(0)
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval);
-
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
 #ifdef CONFIG_XPA
@@ -157,7 +154,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 		null.pte_low = null.pte_high = _PAGE_GLOBAL;
 	}
 
-	set_pte_at(mm, addr, ptep, null);
+	set_pte(ptep, null);
 	htw_start();
 }
 #else
@@ -196,28 +193,41 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 #if !defined(CONFIG_CPU_R3K_TLB)
 	/* Preserve global status for the pair */
 	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
-		set_pte_at(mm, addr, ptep, __pte(_PAGE_GLOBAL));
+		set_pte(ptep, __pte(_PAGE_GLOBAL));
 	else
 #endif
-		set_pte_at(mm, addr, ptep, __pte(0));
+		set_pte(ptep, __pte(0));
 	htw_start();
 }
 #endif
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+	bool do_sync = false;
 
-	if (!pte_present(pteval))
-		goto cache_sync_done;
+	for (i = 0; i < nr; i++) {
+		if (!pte_present(pte))
+			continue;
+		if (pte_present(ptep[i]) &&
+		    (pte_pfn(ptep[i]) == pte_pfn(pte)))
+			continue;
+		do_sync = true;
+	}
 
-	if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
-		goto cache_sync_done;
+	if (do_sync)
+		__update_cache(addr, pte);
 
-	__update_cache(addr, pteval);
-cache_sync_done:
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+	}
 }
+#define set_ptes set_ptes
 
 /*
  * (pmds are folded into puds so this doesn't get actually called,
@@ -486,7 +496,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 					pte_t entry, int dirty)
 {
 	if (!pte_same(*ptep, entry))
-		set_pte_at(vma->vm_mm, address, ptep, entry);
+		set_pte(ptep, entry);
 	/*
 	 * update_mmu_cache will unconditionally execute, handling both
	 * the case that the PTE changed and the spurious fault case.
@@ -568,12 +578,21 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) -{ - pte_t pte = *ptep; - __update_tlb(vma, address, pte); +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) +{ + for (;;) { + pte_t pte = *ptep; + __update_tlb(vma, address, pte); + if (--nr == 0) + break; + ptep++; + address += PAGE_SIZE; + } } +#define update_mmu_cache(vma, address, ptep) \ + update_mmu_cache_range(NULL, vma, address, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c index 4b6554b48923..187d1c16361c 100644 --- a/arch/mips/mm/c-r4k.c +++ b/arch/mips/mm/c-r4k.c @@ -568,13 +568,14 @@ static inline void local_r4k_flush_cache_page(void *args) if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID)) vaddr = NULL; else { + struct folio *folio = page_folio(page); /* * Use kmap_coherent or kmap_atomic to do flushes for * another ASID than the current one. */ map_coherent = (cpu_has_dc_aliases && - page_mapcount(page) && - !Page_dcache_dirty(page)); + folio_mapped(folio) && + !folio_test_dcache_dirty(folio)); if (map_coherent) vaddr = kmap_coherent(page, addr); else diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c index d21cf8c6cf6c..02042100e267 100644 --- a/arch/mips/mm/cache.c +++ b/arch/mips/mm/cache.c @@ -99,13 +99,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes, return 0; } -void __flush_dcache_page(struct page *page) +void __flush_dcache_pages(struct page *page, unsigned int nr) { - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = page_folio(page); + struct address_space *mapping = folio_flush_mapping(folio); unsigned long addr; + unsigned int i; if (mapping && !mapping_mapped(mapping)) { - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); return; } @@ -114,25 +116,21 @@ void __flush_dcache_page(struct page *page) * case is for exec env/arg pages and those are %99 certainly going to * get faulted into the tlb (and thus flushed) anyways. 
*/ - if (PageHighMem(page)) - addr = (unsigned long)kmap_atomic(page); - else - addr = (unsigned long)page_address(page); - - flush_data_cache_page(addr); - - if (PageHighMem(page)) - kunmap_atomic((void *)addr); + for (i = 0; i < nr; i++) { + addr = (unsigned long)kmap_local_page(page + i); + flush_data_cache_page(addr); + kunmap_local((void *)addr); + } } - -EXPORT_SYMBOL(__flush_dcache_page); +EXPORT_SYMBOL(__flush_dcache_pages); void __flush_anon_page(struct page *page, unsigned long vmaddr) { unsigned long addr = (unsigned long) page_address(page); + struct folio *folio = page_folio(page); if (pages_do_alias(addr, vmaddr)) { - if (page_mapcount(page) && !Page_dcache_dirty(page)) { + if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); @@ -147,27 +145,29 @@ EXPORT_SYMBOL(__flush_anon_page); void __update_cache(unsigned long address, pte_t pte) { - struct page *page; + struct folio *folio; unsigned long pfn, addr; int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc; + unsigned int i; pfn = pte_pfn(pte); if (unlikely(!pfn_valid(pfn))) return; - page = pfn_to_page(pfn); - if (Page_dcache_dirty(page)) { - if (PageHighMem(page)) - addr = (unsigned long)kmap_atomic(page); - else - addr = (unsigned long)page_address(page); - - if (exec || pages_do_alias(addr, address & PAGE_MASK)) - flush_data_cache_page(addr); - if (PageHighMem(page)) - kunmap_atomic((void *)addr); + folio = page_folio(pfn_to_page(pfn)); + address &= PAGE_MASK; + address -= offset_in_folio(folio, pfn << PAGE_SHIFT); + + if (folio_test_dcache_dirty(folio)) { + for (i = 0; i < folio_nr_pages(folio); i++) { + addr = (unsigned long)kmap_local_folio(folio, i); - ClearPageDcacheDirty(page); + if (exec || pages_do_alias(addr, address)) + flush_data_cache_page(addr); + kunmap_local((void *)addr); + address += PAGE_SIZE; + } + folio_clear_dcache_dirty(folio); } } diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 5a8002839550..5dcb525a8995 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot) pte_t pte; int tlbidx; - BUG_ON(Page_dcache_dirty(page)); + BUG_ON(folio_test_dcache_dirty(page_folio(page))); preempt_disable(); pagefault_disable(); @@ -169,11 +169,12 @@ void kunmap_coherent(void) void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); if (cpu_has_dc_aliases && - page_mapcount(from) && !Page_dcache_dirty(from)) { + folio_mapped(src) && !folio_test_dcache_dirty(src)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(); @@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); } if (vma->vm_flags & VM_EXEC) flush_cache_page(vma, vaddr, page_to_pfn(page)); @@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, 
unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); } } EXPORT_SYMBOL_GPL(copy_from_user_page); @@ -448,10 +453,10 @@ static inline void __init mem_init_free_highmem(void) void __init mem_init(void) { /* - * When _PFN_SHIFT is greater than PAGE_SHIFT we won't have enough PTE + * When PFN_PTE_SHIFT is greater than PAGE_SHIFT we won't have enough PTE * bits to hold a full 32b physical address on MIPS32 systems. */ - BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (_PFN_SHIFT > PAGE_SHIFT)); + BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT)); #ifdef CONFIG_HIGHMEM max_mapnr = highend_pfn ? highend_pfn : max_low_pfn; diff --git a/arch/mips/mm/pgtable-32.c b/arch/mips/mm/pgtable-32.c index f57fb69472f8..84dd5136d53a 100644 --- a/arch/mips/mm/pgtable-32.c +++ b/arch/mips/mm/pgtable-32.c @@ -35,7 +35,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot) { pmd_t pmd; - pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot); + pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot); return pmd; } diff --git a/arch/mips/mm/pgtable-64.c b/arch/mips/mm/pgtable-64.c index b4386a0e2ef8..c76d21f7dffb 100644 --- a/arch/mips/mm/pgtable-64.c +++ b/arch/mips/mm/pgtable-64.c @@ -93,7 +93,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot) { pmd_t pmd; - pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot); + pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot); return pmd; } diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c index 8d514a9082c6..b4e1c783e617 100644 --- a/arch/mips/mm/tlbex.c +++ b/arch/mips/mm/tlbex.c @@ -253,7 +253,7 @@ static void output_pgtable_bits_defines(void) pr_define("_PAGE_GLOBAL_SHIFT %d\n", _PAGE_GLOBAL_SHIFT); pr_define("_PAGE_VALID_SHIFT %d\n", _PAGE_VALID_SHIFT); pr_define("_PAGE_DIRTY_SHIFT %d\n", _PAGE_DIRTY_SHIFT); - pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT); + pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT); pr_debug("\n"); }
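The MIPS conversion above is the template the rest of the series follows: set_ptes() writes nr consecutive entries for one folio, advancing the physical frame by one page per entry, and the dcache sync is hoisted out of the per-page path so it runs at most once per call. As a minimal sketch of that contract (illustrative only, not part of the patch; the helper name is invented, and it assumes an architecture that keeps the pfn at bit PFN_PTE_SHIFT of the PTE):

        /* Write nr entries mapping one contiguous folio; pte maps its
         * first page, and each successive entry maps the next pfn. */
        static inline void set_ptes_sketch(pte_t *ptep, pte_t pte, unsigned int nr)
        {
                for (;;) {
                        set_pte(ptep, pte);
                        if (--nr == 0)
                                break;
                        ptep++;
                        pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
                }
        }

Architectures that encode the pfn differently only change the increment; nios2 below, which keeps the pfn in the low PTE bits, steps pte_val(pte) by 1 instead.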
From patchwork Mon Jul 10 20:43:19 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118112
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Dinh Nguyen
Subject: [PATCH v5 18/38] nios2: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:19 +0100
Message-Id: <20230710204339.3554919-19-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Dinh Nguyen Acked-by: Dinh Nguyen --- arch/nios2/include/asm/cacheflush.h | 6 ++- arch/nios2/include/asm/pgtable.h | 28 ++++++---- arch/nios2/mm/cacheflush.c | 79 ++++++++++++++++------------- 3 files changed, 67 insertions(+), 46 deletions(-) diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h index d0b71dd71287..8624ca83cffe 100644 --- a/arch/nios2/include/asm/cacheflush.h +++ b/arch/nios2/include/asm/cacheflush.h @@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio extern void flush_icache_range(unsigned long start, unsigned long end); -extern void flush_icache_page(struct vm_area_struct *vma, struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1); #define flush_cache_vmap(start, end) flush_dcache_range(start, end) #define flush_cache_vunmap(start, end) flush_dcache_range(start, end) diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index 0f5c2564e9f5..be6bf3e0bd7a 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -178,14 +178,21 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) { *ptep = pteval; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval)); - - flush_dcache_range(paddr, paddr + PAGE_SIZE); - set_pte(ptep, pteval); + unsigned long paddr = (unsigned long)page_to_virt(pte_page(pte)); + + flush_dcache_range(paddr, paddr + nr * PAGE_SIZE); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1; + } } +#define set_ptes set_ptes static inline int pmd_none(pmd_t pmd) { @@ -202,7 +209,7 @@ static inline void pte_clear(struct mm_struct *mm, pte_val(null) = (addr >> PAGE_SHIFT) & 0xf; - set_pte_at(mm, addr, ptep, null); + set_pte(ptep, null); } /* @@ -273,7 +280,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) extern void __init paging_init(void); extern void __init mmu_init(void); -extern void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); + +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #endif /* _ASM_NIOS2_PGTABLE_H */ diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c index 6aa9257c3ede..28b805f465a8 --- a/arch/nios2/mm/cacheflush.c +++
b/arch/nios2/mm/cacheflush.c @@ -71,26 +71,26 @@ static void __flush_icache(unsigned long start, unsigned long end) __asm__ __volatile(" flushp\n"); } -static void flush_aliases(struct address_space *mapping, struct page *page) +static void flush_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; + struct vm_area_struct *vma; pgoff_t pgoff; + unsigned long nr = folio_nr_pages(folio); - pgoff = page->index; + pgoff = folio->index; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long start; - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, - page_to_pfn(page)); + start = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); + flush_cache_range(vma, start, start + nr * PAGE_SIZE); } flush_dcache_mmap_unlock(mapping); } @@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, __flush_icache(start, end); } -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { unsigned long start = (unsigned long) page_address(page); - unsigned long end = start + PAGE_SIZE; + unsigned long end = start + nr * PAGE_SIZE; __flush_dcache(start, end); __flush_icache(start, end); @@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, __flush_icache(start, end); } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +static void __flush_dcache_folio(struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. */ - unsigned long start = (unsigned long)page_address(page); + unsigned long start = (unsigned long)folio_address(folio); - __flush_dcache(start, start + PAGE_SIZE); + __flush_dcache(start, start + folio_size(folio)); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); /* Flush this page if there are aliases. 
*/ if (mapping && !mapping_mapped(mapping)) { - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(folio); if (mapping) { - unsigned long start = (unsigned long)page_address(page); - flush_aliases(mapping, page); - flush_icache_range(start, start + PAGE_SIZE); + unsigned long start = (unsigned long)folio_address(folio); + flush_aliases(mapping, folio); + flush_icache_range(start, start + folio_size(folio)); } - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); -void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; struct address_space *mapping; reload_tlb_page(vma, address, pte); @@ -215,19 +222,19 @@ void update_mmu_cache, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(folio); - if(mapping) - { - flush_aliases(mapping, page); + mapping = folio_flush_mapping(folio); + if (mapping) { + flush_aliases(mapping, folio); if (vma->vm_flags & VM_EXEC) - flush_icache_page(vma, page); + flush_icache_pages(vma, &folio->page, + folio_nr_pages(folio)); } }
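The nios2 flush_dcache_folio() above is a compact example of the deferred-flush idiom that PG_dcache_clean implements on every architecture in this series. Condensed, with the same names as the patch (a sketch of the control flow, not a replacement for it):

        /* A page-cache folio with no user mappings is only marked
         * not-clean here; the real flush is deferred until the folio
         * is faulted in, when update_mmu_cache_range() sees the clear
         * bit and calls __flush_dcache_folio() once for the whole
         * folio. */
        if (mapping && !mapping_mapped(mapping))
                clear_bit(PG_dcache_clean, &folio->flags);      /* defer */
        else
                __flush_dcache_folio(folio);                    /* flush now */

Moving the bit from page->flags to folio->flags means a large folio pays for one atomic flag operation and one deferral decision instead of one per page.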
URuq/PaKmjO75HVHDmdVlc8pScyM+hHSR6DNMKmUMYEEYPht7d/JGTAXijuUkzhBTsBT JXa/SFW5Djry0JGt1o4UFB5a8ZWbRK+NzFt4B28xXioleSmTU0rPtfVCkQ8raKYslcAd gLJw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=LstrdFs9; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id r6-20020aa7cfc6000000b0051dfa08070fsi431245edy.430.2023.07.10.14.47.17; Mon, 10 Jul 2023 14:47:40 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=LstrdFs9; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233263AbjGJUpy (ORCPT + 99 others); Mon, 10 Jul 2023 16:45:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229920AbjGJUo5 (ORCPT ); Mon, 10 Jul 2023 16:44:57 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7CD6E19C; Mon, 10 Jul 2023 13:43:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LkNLbBSWedZKYyPfN1pGuZQo5tC7Q714M9YkIGeohI8=; b=LstrdFs9UlKVnGLIEf/uAPHDQu pOcMiPmZPEJbQXp8d+wByOWnPshMUaJOCjYNfNldJhUqffupXq2qr8ZW2cIEmvQSvBMZBk4WZTWwx D+h9KKwgOJoSQv5oq9816YjDOxDh8LKWQTqQLYLMInOiJTbUxKGZjexoGn7JzrNc1CLeVnITSE7n7 jz4V7oRq0/Gll+bVW0g0QVkL+PoDSNhsfLO9kWfztiTTfuVf7WndNMFG5cpOgSme0XkSl0Xiox2R5 bU0RGm7kycVY3av7mjwaL0GDR6eAER8KIgTNLG4IrpJTHfaEWOCIWBqgqOZHA39mj7MRaicZpYCdR TEEB/kQQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1qIxjT-00Eupe-BF; Mon, 10 Jul 2023 20:43:43 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Jonas Bonn , Stefan Kristiansson , Stafford Horne , linux-openrisc@vger.kernel.org Subject: [PATCH v5 19/38] openrisc: Implement the new page table range API Date: Mon, 10 Jul 2023 21:43:20 +0100 Message-Id: <20230710204339.3554919-20-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230710204339.3554919-1-willy@infradead.org> References: <20230710204339.3554919-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_BLOCKED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1771071771687416811 X-GMAIL-MSGID: 1771071771687416811 Add 
PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Jonas Bonn Cc: Stefan Kristiansson Cc: Stafford Horne Cc: linux-openrisc@vger.kernel.org --- arch/openrisc/include/asm/cacheflush.h | 8 +++++++- arch/openrisc/include/asm/pgtable.h | 15 ++++++++++----- arch/openrisc/mm/cache.c | 12 ++++++++---- 3 files changed, 25 insertions(+), 10 deletions(-) diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h index eeac40d4a854..984c331ff5f4 100644 --- a/arch/openrisc/include/asm/cacheflush.h +++ b/arch/openrisc/include/asm/cacheflush.h @@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page) */ #define PG_dc_clean PG_arch_1 +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_dc_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 static inline void flush_dcache_page(struct page *page) { - clear_bit(PG_dc_clean, &page->flags); + flush_dcache_folio(page_folio(page)); } #define flush_icache_user_page(vma, page, addr, len) \ diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index 3eb9b9555d0d..7bdf1bb0d177 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -46,7 +46,7 @@ extern void paging_init(void); * hook is made available. */ #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval) + /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) @@ -357,6 +357,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) #define __pmd_offset(address) \ (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pte_pfn(x) ((unsigned long)(((x).pte)) >> PAGE_SHIFT) #define pfn_pte(pfn, prot) __pte((((pfn) << PAGE_SHIFT)) | pgprot_val(prot)) @@ -379,13 +380,17 @@ static inline void update_tlb(struct vm_area_struct *vma, extern void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { - update_tlb(vma, address, pte); - update_cache(vma, address, pte); + update_tlb(vma, address, ptep); + update_cache(vma, address, ptep); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + /* __PHX__ FIXME, SWAP, this probably doesn't work */ /* diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c index 534a52ec5e66..eb43b73f3855 100644 --- a/arch/openrisc/mm/cache.c +++ b/arch/openrisc/mm/cache.c @@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte) { unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT; - struct page *page = pfn_to_page(pfn); - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); /* * Since icaches do not snoop for updated data on OpenRISC, we * must write back and invalidate any dirty pages manually. We * can skip data pages, since they will not end up in icaches. 
*/ - if ((vma->vm_flags & VM_EXEC) && dirty) - sync_icache_dcache(page); + if ((vma->vm_flags & VM_EXEC) && dirty) { + unsigned int nr = folio_nr_pages(folio); + + while (nr--) + sync_icache_dcache(folio_page(folio, nr)); + } }
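The openrisc conversion is the smallest in the series: only the dirty test moves from page->flags to folio->flags, and the icache writeback then walks the folio's pages. Pulling the pieces of the patch together into one fragment (names as in the patch; assumes pfn and vma are in scope as they are in update_cache()):

        struct folio *folio = page_folio(pfn_to_page(pfn));
        int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
        unsigned int nr = folio_nr_pages(folio);

        /* One test-and-set covers the whole folio; the writeback still
         * runs page-at-a-time because sync_icache_dcache() takes a page. */
        if ((vma->vm_flags & VM_EXEC) && dirty)
                while (nr--)
                        sync_icache_dcache(folio_page(folio, nr));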
From patchwork Mon Jul 10 20:43:21 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118117
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , "James E.J. Bottomley" , Helge Deller , linux-parisc@vger.kernel.org
Subject: [PATCH v5 20/38] parisc: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:21 +0100
Message-Id: <20230710204339.3554919-21-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: "James E.J.
Bottomley" Cc: Helge Deller Cc: linux-parisc@vger.kernel.org --- arch/parisc/include/asm/cacheflush.h | 14 ++-- arch/parisc/include/asm/pgtable.h | 37 +++++---- arch/parisc/kernel/cache.c | 107 ++++++++++++++++++--------- 3 files changed, 105 insertions(+), 53 deletions(-) diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index c8b6928cee1e..b77c3e0c37d3 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -43,8 +43,13 @@ void invalidate_kernel_vmap_range(void *vaddr, int size); #define flush_cache_vmap(start, end) flush_cache_all() #define flush_cache_vunmap(start, end) flush_cache_all() +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) @@ -53,10 +58,9 @@ void flush_dcache_page(struct page *page); #define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \ xa_unlock_irqrestore(&mapping->i_pages, flags) -#define flush_icache_page(vma,page) do { \ - flush_kernel_dcache_page_addr(page_address(page)); \ - flush_kernel_icache_page(page_address(page)); \ -} while (0) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) #define flush_icache_range(s,e) do { \ flush_kernel_dcache_range_asm(s,e); \ diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index 5656395c95ee..ce38bb375b60 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -73,15 +73,6 @@ extern void __update_cache(pte_t pte); mb(); \ } while(0) -#define set_pte_at(mm, addr, pteptr, pteval) \ - do { \ - if (pte_present(pteval) && \ - pte_user(pteval)) \ - __update_cache(pteval); \ - *(pteptr) = (pteval); \ - purge_tlb_entries(mm, addr); \ - } while (0) - #endif /* !__ASSEMBLY__ */ #define pte_ERROR(e) \ @@ -285,7 +276,7 @@ extern unsigned long *empty_zero_page; #define pte_none(x) (pte_val(x) == 0) #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) #define pte_user(x) (pte_val(x) & _PAGE_USER) -#define pte_clear(mm, addr, xp) set_pte_at(mm, addr, xp, __pte(0)) +#define pte_clear(mm, addr, xp) set_pte(xp, __pte(0)) #define pmd_flag(x) (pmd_val(x) & PxD_FLAG_MASK) #define pmd_address(x) ((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT) @@ -391,11 +382,29 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) extern void paging_init (void); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + if (pte_present(pte) && pte_user(pte)) + __update_cache(pte); + for (;;) { + *ptep = pte; + purge_tlb_entries(mm, addr); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1 << PFN_PTE_SHIFT; + addr += PAGE_SIZE; + } +} +#define set_ptes set_ptes + /* Used for deferring calls to flush_dcache_page() */ #define PG_dcache_dirty PG_arch_1 -#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep) +#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) __update_cache(*ptep) +#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep) /* * Encode/decode swap entries and swap PTEs. 
Swap PTEs are all PTEs that @@ -450,7 +459,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned if (!pte_young(pte)) { return 0; } - set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte)); + set_pte(ptep, pte_mkold(pte)); return 1; } @@ -460,14 +469,14 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t old_pte; old_pte = *ptep; - set_pte_at(mm, addr, ptep, __pte(0)); + set_pte(ptep, __pte(0)); return old_pte; } static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep)); + set_pte(ptep, pte_wrprotect(*ptep)); } #define pte_same(A,B) (pte_val(A) == pte_val(B)) diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c index b55b35c89d6a..442109a48940 100644 --- a/arch/parisc/kernel/cache.c +++ b/arch/parisc/kernel/cache.c @@ -94,11 +94,11 @@ static inline void flush_data_cache(void) /* Kernel virtual address of pfn. */ #define pfn_va(pfn) __va(PFN_PHYS(pfn)) -void -__update_cache(pte_t pte) +void __update_cache(pte_t pte) { unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; + unsigned int nr; /* We don't have pte special. As a result, we can be called with an invalid pfn and we don't need to flush the kernel dcache page. @@ -106,13 +106,17 @@ __update_cache(pte_t pte) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page_mapping_file(page) && - test_bit(PG_dcache_dirty, &page->flags)) { - flush_kernel_dcache_page_addr(pfn_va(pfn)); - clear_bit(PG_dcache_dirty, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + if (folio_flush_mapping(folio) && + test_bit(PG_dcache_dirty, &folio->flags)) { + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); + clear_bit(PG_dcache_dirty, &folio->flags); } else if (parisc_requires_coherency()) - flush_kernel_dcache_page_addr(pfn_va(pfn)); + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); } void @@ -366,6 +370,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad preempt_enable(); } +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) +{ + void *kaddr = page_address(page); + + for (;;) { + flush_kernel_dcache_page_addr(kaddr); + flush_kernel_icache_page(kaddr); + if (--nr == 0) + break; + kaddr += PAGE_SIZE; + } +} + static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) { pte_t *ptep = NULL; @@ -394,27 +412,30 @@ static inline bool pte_needs_flush(pte_t pte) == (_PAGE_PRESENT | _PAGE_ACCESSED); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - struct address_space *mapping = page_mapping_file(page); - struct vm_area_struct *mpnt; - unsigned long offset; + struct address_space *mapping = folio_flush_mapping(folio); + struct vm_area_struct *vma; unsigned long addr, old_addr = 0; + void *kaddr; unsigned long count = 0; - unsigned long flags; + unsigned long i, nr, flags; pgoff_t pgoff; if (mapping && !mapping_mapped(mapping)) { - set_bit(PG_dcache_dirty, &page->flags); + set_bit(PG_dcache_dirty, &folio->flags); return; } - flush_kernel_dcache_page_addr(page_address(page)); + nr = folio_nr_pages(folio); + kaddr = folio_address(folio); + for (i = 0; i < nr; i++) + flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE); if (!mapping) return; - pgoff = page->index; + pgoff = folio->index; /* * We have carefully arranged in arch_get_unmapped_area() 
that @@ -424,20 +445,33 @@ void flush_dcache_page(struct page *page) * on machines that support equivalent aliasing */ flush_dcache_mmap_lock_irqsave(mapping, flags); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - addr = mpnt->vm_start + offset; - if (parisc_requires_coherency()) { - bool needs_flush = false; - pte_t *ptep; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long offset = pgoff - vma->vm_pgoff; + unsigned long pfn = folio_pfn(folio); + + addr = vma->vm_start; + nr = folio_nr_pages(folio); + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + addr += offset * PAGE_SIZE; + } + if (addr + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - addr) / PAGE_SIZE; - ptep = get_ptep(mpnt->vm_mm, addr); - if (ptep) { - needs_flush = pte_needs_flush(*ptep); + if (parisc_requires_coherency()) { + for (i = 0; i < nr; i++) { + pte_t *ptep = get_ptep(vma->vm_mm, + addr + i * PAGE_SIZE); + if (!ptep) + continue; + if (pte_needs_flush(*ptep)) + flush_user_cache_page(vma, + addr + i * PAGE_SIZE); + /* Optimise accesses to the same table? */ pte_unmap(ptep); } - if (needs_flush) - flush_user_cache_page(mpnt, addr); } else { /* * The TLB is the engine of coherence on parisc: @@ -450,27 +484,32 @@ void flush_dcache_page(struct page *page) * in (until the user or kernel specifically * accesses it, of course) */ - flush_tlb_page(mpnt, addr); + for (i = 0; i < nr; i++) + flush_tlb_page(vma, addr + i * PAGE_SIZE); if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1)) != (addr & (SHM_COLOUR - 1))) { - __flush_cache_page(mpnt, addr, page_to_phys(page)); + for (i = 0; i < nr; i++) + __flush_cache_page(vma, + addr + i * PAGE_SIZE, + (pfn + i) * PAGE_SIZE); /* * Software is allowed to have any number * of private mappings to a page. 
*/ - if (!(mpnt->vm_flags & VM_SHARED)) + if (!(vma->vm_flags & VM_SHARED)) continue; if (old_addr) pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n", - old_addr, addr, mpnt->vm_file); - old_addr = addr; + old_addr, addr, vma->vm_file); + if (nr == folio_nr_pages(folio)) + old_addr = addr; } } WARN_ON(++count == 4096); } flush_dcache_mmap_unlock_irqrestore(mapping, flags); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); /* Defined in arch/parisc/kernel/pacache.S */ EXPORT_SYMBOL(flush_kernel_dcache_range_asm);
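The interval-tree walk in the parisc flush_dcache_folio() above carries the subtlest arithmetic of the series: offset is computed in unsigned arithmetic but treated as a signed page delta. A commented restatement of the clamping (a sketch of the same logic, using the same variable names as the patch):

        /* offset = folio start relative to this VMA, in pages, modulo
         * 2^BITS_PER_LONG: a folio beginning before the VMA wraps to a
         * huge value, i.e. a "negative" delta. */
        unsigned long offset = pgoff - vma->vm_pgoff;

        addr = vma->vm_start;
        nr = folio_nr_pages(folio);
        if (offset > -nr) {
                /* Unsigned compare, true when -nr < (signed)offset < 0:
                 * the folio starts before the VMA but still overlaps it,
                 * so skip the leading pages this VMA does not map. */
                pfn -= offset;          /* advances pfn by -offset pages */
                nr += offset;           /* shrinks nr by -offset pages */
        } else {
                /* Folio starts inside the VMA: flush from its address. */
                addr += offset * PAGE_SIZE;
        }
        /* And never flush past the end of the VMA. */
        if (addr + nr * PAGE_SIZE > vma->vm_end)
                nr = (vma->vm_end - addr) / PAGE_SIZE;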
From patchwork Mon Jul 10 20:43:22 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118089
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v5 21/38] powerpc: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:22 +0100
Message-Id: <20230710204339.3554919-22-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org Reviewed-by: Christophe Leroy --- arch/powerpc/include/asm/book3s/32/pgtable.h | 5 -- arch/powerpc/include/asm/book3s/64/pgtable.h | 6 +-- arch/powerpc/include/asm/book3s/pgtable.h | 11 ++--- arch/powerpc/include/asm/cacheflush.h | 14 ++++-- arch/powerpc/include/asm/kvm_ppc.h | 10 ++-- arch/powerpc/include/asm/nohash/pgtable.h | 16 ++---- arch/powerpc/include/asm/pgtable.h | 12 +++++ arch/powerpc/mm/book3s64/hash_utils.c | 11 +++-- arch/powerpc/mm/cacheflush.c | 40 +++++---------- arch/powerpc/mm/nohash/e500_hugetlbpage.c | 3 +- arch/powerpc/mm/pgtable.c | 51 +++++++++++--------- 11 files changed, 86 insertions(+), 93 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h index 7bf1fe7297c6..5f12b9382909 100644 --- a/arch/powerpc/include/asm/book3s/32/pgtable.h +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h @@ -462,11 +462,6 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) pgprot_val(pgprot)); } -static inline unsigned long pte_pfn(pte_t pte) -{ - return pte_val(pte) >> PTE_RPN_SHIFT; -} - /* Generic modifiers for PTE bits */ static inline pte_t pte_wrprotect(pte_t pte) { diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 4acc9690f599..c5baa3082a5a 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -104,6 +104,7 @@ * and every thing below PAGE_SHIFT; */ #define PTE_RPN_MASK (((1UL << _PAGE_PA_MAX) - 1) & (PAGE_MASK)) +#define PTE_RPN_SHIFT PAGE_SHIFT /* * set of bits not changed in pmd_modify. Even though we have hash specific bits * in here, on radix we expect them to be zero. @@ -569,11 +570,6 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) return __pte(((pte_basic_t)pfn << PAGE_SHIFT) | pgprot_val(pgprot) | _PAGE_PTE); } -static inline unsigned long pte_pfn(pte_t pte) -{ - return (pte_val(pte) & PTE_RPN_MASK) >> PAGE_SHIFT; -} - /* Generic modifiers for PTE bits */ static inline pte_t pte_wrprotect(pte_t pte) { diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h index d18b748ea3ae..3b7bd36a2321 100644 --- a/arch/powerpc/include/asm/book3s/pgtable.h +++ b/arch/powerpc/include/asm/book3s/pgtable.h @@ -9,13 +9,6 @@ #endif #ifndef __ASSEMBLY__ -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - - #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty); @@ -36,7 +29,9 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * corresponding HPTE into the hash table ahead of time, instead of * waiting for the inevitable extra hash-table miss exception. 
*/ -static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index 7564dd4fd12b..ef7d2de33b89 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ b/arch/powerpc/include/asm/cacheflush.h @@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * It just marks the page as not i-cache clean. We do the i-cache * flush later when the page is given to a user process, if necessary. */ -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) return; /* avoid an atomic op if possible */ - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } void flush_icache_range(unsigned long start, unsigned long stop); @@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); #define flush_icache_user_page flush_icache_user_page -void flush_dcache_icache_page(struct page *page); +void flush_dcache_icache_folio(struct folio *folio); /** * flush_dcache_range(): Write any modified data cache blocks out to memory and diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index d16d80ad2ae4..b4da8514af43 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -894,7 +894,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids); static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) { - struct page *page; + struct folio *folio; /* * We can only access pages that the kernel maps * as memory. Bail out for unmapped ones. @@ -903,10 +903,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) return; /* Clear i-cache for new pages */ - page = pfn_to_page(pfn); - if (!test_bit(PG_dcache_clean, &page->flags)) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index a6caaaab6f92..56ea48276356 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -101,8 +101,6 @@ static inline bool pte_access_permitted(pte_t pte, bool write) static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) { return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) | pgprot_val(pgprot)); } -static inline unsigned long pte_pfn(pte_t pte) { - return pte_val(pte) >> PTE_RPN_SHIFT; } /* Generic modifiers for PTE bits */ static inline pte_t pte_exprotect(pte_t pte) @@ -166,12 +164,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE); } -/* Insert a PTE, top-level function is out of line. 
It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but @@ -282,10 +274,12 @@ static inline int pud_huge(pud_t pud) * for the page which has just been mapped in. */ #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE) -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); #else -static inline -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {} +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) {} #endif #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index 445a22987aa3..da5119dba8a4 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -41,6 +41,12 @@ struct mm_struct; #ifndef __ASSEMBLY__ +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr); +#define set_ptes set_ptes +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + #ifndef MAX_PTRS_PER_PGD #define MAX_PTRS_PER_PGD PTRS_PER_PGD #endif @@ -48,6 +54,12 @@ struct mm_struct; /* Keep these as a macros to avoid include dependency mess */ #define pte_page(x) pfn_to_page(pte_pfn(x)) #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot)) + +static inline unsigned long pte_pfn(pte_t pte) +{ + return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT; +} + /* * Select all bits except the pfn */ diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index fedffe3ae136..ad2afa08e62e 100644 --- a/arch/powerpc/mm/book3s64/hash_utils.c +++ b/arch/powerpc/mm/book3s64/hash_utils.c @@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void) */ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap) { - struct page *page; + struct folio *folio; if (!pfn_valid(pte_pfn(pte))) return pp; - page = pte_page(pte); + folio = page_folio(pte_page(pte)); /* page is dirty */ - if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) { + if (!test_bit(PG_dcache_clean, &folio->flags) && + !folio_test_reserved(folio)) { if (trap == INTERRUPT_INST_STORAGE) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } else pp |= HPTE_R_N; } diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c index 0e9b4879c0f9..8760d2223abe 100644 --- a/arch/powerpc/mm/cacheflush.c +++ b/arch/powerpc/mm/cacheflush.c @@ -148,44 +148,30 @@ static void __flush_dcache_icache(void *p) invalidate_icache_range(addr, addr + PAGE_SIZE); } -static void flush_dcache_icache_hugepage(struct page *page) +void flush_dcache_icache_folio(struct folio *folio) { - int i; - int nr = compound_nr(page); + unsigned int i, nr = folio_nr_pages(folio); - if (!PageHighMem(page)) { + if (flush_coherent_icache()) + return; + + if (!folio_test_highmem(folio)) { + void *addr = folio_address(folio); for (i = 0; i < nr; 
i++) - __flush_dcache_icache(lowmem_page_address(page + i)); - } else { + __flush_dcache_icache(addr + i * PAGE_SIZE); + } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { for (i = 0; i < nr; i++) { - void *start = kmap_local_page(page + i); + void *start = kmap_local_folio(folio, i * PAGE_SIZE); __flush_dcache_icache(start); kunmap_local(start); } - } -} - -void flush_dcache_icache_page(struct page *page) -{ - if (flush_coherent_icache()) - return; - - if (PageCompound(page)) - return flush_dcache_icache_hugepage(page); - - if (!PageHighMem(page)) { - __flush_dcache_icache(lowmem_page_address(page)); - } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { - void *start = kmap_local_page(page); - - __flush_dcache_icache(start); - kunmap_local(start); } else { - flush_dcache_icache_phys(page_to_phys(page)); + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + flush_dcache_icache_phys((pfn + i) * PAGE_SIZE); } } -EXPORT_SYMBOL(flush_dcache_icache_page); void clear_user_page(void *page, unsigned long vaddr, struct page *pg) { diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c index 58c8d9849cb1..6b30e40d4590 100644 --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c +++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c @@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte) * * This must always be called with the pte lock held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { if (is_vm_hugetlb_page(vma)) book3e_hugetlb_preload(vma, address, *ptep); diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index cb2dcdb18f8e..db236b494845 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte) return 0; } -static struct page *maybe_pte_to_page(pte_t pte) +static struct folio *maybe_pte_to_folio(pte_t pte) { unsigned long pfn = pte_pfn(pte); struct page *page; @@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte) page = pfn_to_page(pfn); if (PageReserved(page)) return NULL; - return page; + return page_folio(page); } #ifdef CONFIG_PPC_BOOK3S @@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte) pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS); if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) || cpu_has_feature(CPU_FTR_NOEXECUTE))) { - struct page *pg = maybe_pte_to_page(pte); - if (!pg) + struct folio *folio = maybe_pte_to_folio(pte); + if (!folio) return pte; - if (!test_bit(PG_dcache_clean, &pg->flags)) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } return pte; @@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; } */ static inline pte_t set_pte_filter(pte_t pte) { - struct page *pg; + struct folio *folio; if (radix_enabled()) return pte; @@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte) return pte; /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) return pte; /* If the page clean, we move on */ - if 
(test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) return pte; /* If it's an exec fault, we flush the cache and make it clean */ if (is_exec_fault()) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); return pte; } @@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte) static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, int dirty) { - struct page *pg; + struct folio *folio; if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) return pte; @@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, #endif /* CONFIG_DEBUG_VM */ /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) goto bail; /* If the page is already clean, we move on */ - if (test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) goto bail; /* Clean the page and set PG_dcache_clean */ - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); bail: return pte_mkexec(pte); @@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, /* * set_pte stores a linux PTE into the linux page table. */ -void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte) +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr) { /* * Make sure hardware valid bit is not set. We don't do @@ -203,7 +203,14 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte = set_pte_filter(pte); /* Perform the setting of the PTE */ - __set_pte_at(mm, addr, ptep, pte, 0); + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + (1UL << PTE_RPN_SHIFT)); + addr += PAGE_SIZE; + } } void unmap_kernel_page(unsigned long va) From patchwork Mon Jul 10 20:43:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 118084 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp63254vqm; Mon, 10 Jul 2023 13:47:07 -0700 (PDT) X-Google-Smtp-Source: APBJJlEwP5wTZodqFgPLvlYPFy1aocnGwmbvJ+GnVwLVASg1huR1T4afe3NApdv4iFkzE0ZqmMmc X-Received: by 2002:a2e:800b:0:b0:2b6:a344:29cf with SMTP id j11-20020a2e800b000000b002b6a34429cfmr9740757ljg.17.1689022027056; Mon, 10 Jul 2023 13:47:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689022027; cv=none; d=google.com; s=arc-20160816; b=KOiR63fW/mTP3LTH1jt6ZYFx6CwNaJ8I5OwqxcY5ZtDHq21Ulf0SWdmPOLfcZT7mWw O0F3Wu5MaT0EK54XYZAYy2fc164OounqjG13fOjGce3jWjD1+16UOOKBLSWDBM3g38D3 8BaYZCHyK1KRfs13g/ldRiMW8WXJrQKmfTZn3tWnFuZDoX5lyLMZnzAUQ2XncITJsd4L SQFLea/TzwAflCzZ6ItjuV8mK+45glOpgfoDeX3sojTqXvZyXz9u/LAJujKHK4hLIJTf BP7p+lVUR/f54Ez5R8KkdYg1k2HnkrRA/JEWCf+tgD5IrTK3kTMlxOwglj54iVW5859A know== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=n5JuGTxMehujF2hW5yjYC2rYcF2MyOBRN8Ek1lm+OQ8=; fh=C6qUs4q9PxZQFwtC8FG8QILjBEvy8oPwWiZgwVUUn/U=; b=hujr+QacHQDAxf0msnBWatvcSeqKjq/9B2NshXpIR6rWZvYni+tAZ6WKf16Kd8P1/6 
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alexandre Ghiti , Mike Rapoport , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org
Subject: [PATCH v5 22/38] riscv: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:23 +0100
Message-Id: <20230710204339.3554919-23-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>
List-ID: X-Mailing-List:
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1771067961216852222 X-GMAIL-MSGID: 1771067961216852222 Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Acked-by: Mike Rapoport (IBM) Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org Acked-by: Palmer Dabbelt --- arch/riscv/include/asm/cacheflush.h | 19 +++++++-------- arch/riscv/include/asm/pgtable.h | 38 +++++++++++++++++++---------- arch/riscv/mm/cacheflush.c | 13 +++------- 3 files changed, 37 insertions(+), 33 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 8091b8bf4883..0d8c92c5dfb7 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). - */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing. diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 2137e36595b3..c8f897ed5fd0 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -445,8 +445,9 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -455,8 +456,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. 
*/ - local_flush_tlb_page(address); + while (nr--) + local_flush_tlb_page(address + nr * PAGE_SIZE); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache @@ -487,8 +491,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) void flush_icache_pte(pte_t pte); -static inline void __set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void __set_pte_at(pte_t *ptep, pte_t pteval) { if (pte_present(pteval) && pte_exec(pteval)) flush_icache_pte(pteval); @@ -496,17 +499,26 @@ static inline void __set_pte_at(struct mm_struct *mm, set_pte(ptep, pteval); } -static inline void set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pteval, 1); - __set_pte_at(mm, addr, ptep, pteval); + page_table_check_ptes_set(mm, addr, ptep, pteval, nr); + + for (;;) { + __set_pte_at(ptep, pteval); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pteval) += 1 << _PAGE_PFN_SHIFT; + } } +#define set_ptes set_ptes static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - __set_pte_at(mm, addr, ptep, __pte(0)); + __set_pte_at(ptep, __pte(0)); } #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS @@ -515,7 +527,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma, pte_t entry, int dirty) { if (!pte_same(*ptep, entry)) - set_pte_at(vma->vm_mm, address, ptep, entry); + __set_pte_at(ptep, entry); /* * update_mmu_cache will unconditionally execute, handling both * the case that the PTE changed and the spurious fault case. @@ -688,14 +700,14 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { page_table_check_pmd_set(mm, addr, pmdp, pmd); - return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)); + return __set_pte_at((pte_t *)pmdp, pmd_pte(pmd)); } static inline void set_pud_at(struct mm_struct *mm, unsigned long addr, pud_t *pudp, pud_t pud) { page_table_check_pud_set(mm, addr, pudp, pud); - return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud)); + return __set_pte_at((pte_t *)pudp, pud_pte(pud)); } #ifdef CONFIG_PAGE_TABLE_CHECK diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c index fbc59b3f69f2..f1387272a551 100644 --- a/arch/riscv/mm/cacheflush.c +++ b/arch/riscv/mm/cacheflush.c @@ -82,18 +82,11 @@ void flush_icache_mm(struct mm_struct *mm, bool local) #ifdef CONFIG_MMU void flush_icache_pte(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { + if (!test_bit(PG_dcache_clean, &folio->flags)) { flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } #endif /* CONFIG_MMU */
From patchwork Mon Jul 10 20:43:24 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118103
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Gerald Schaefer , Mike Rapoport , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org
Subject: [PATCH v5 23/38] s390: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:24 +0100
Message-Id: <20230710204339.3554919-24-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().
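For callers, the point of the new API is one call per folio instead of one per page. A hedged sketch of the expected usage (map_folio_sketch and its arguments are illustrative only, not part of this patch; on s390 update_mmu_cache_range() is the no-op defined in the hunk below, since the hardware walks the page tables itself):

	static void map_folio_sketch(struct vm_area_struct *vma, unsigned long addr,
			pte_t *ptep, struct folio *folio, pgprot_t prot)
	{
		unsigned int nr = folio_nr_pages(folio);
		/* PTE for the folio's first page; set_ptes() steps the PFN itself */
		pte_t pte = mk_pte(folio_page(folio, 0), prot);

		set_ptes(vma->vm_mm, addr, ptep, pte, nr);
		update_mmu_cache_range(NULL, vma, addr, ptep, nr);
	}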
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Gerald Schaefer Acked-by: Mike Rapoport (IBM) Cc: Heiko Carstens Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: linux-s390@vger.kernel.org --- arch/s390/include/asm/pgtable.h | 33 ++++++++++++++++++++++++--------- 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index c55f3c3365af..02973c740a5b 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -47,6 +47,7 @@ static inline void update_page_count(int level, long count) * tables contain all the necessary information. */ #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) do { } while (0) #define update_mmu_cache_pmd(vma, address, ptep) do { } while (0) /* @@ -1316,20 +1317,34 @@ pgprot_t pgprot_writecombine(pgprot_t prot); pgprot_t pgprot_writethrough(pgprot_t prot); /* - * Certain architectures need to do special things when PTEs - * within a page table are directly modified. Thus, the following - * hook is made available. + * Set multiple PTEs to consecutive pages with a single call. All PTEs + * are within the same folio, PMD and VMA. */ -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t entry) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t entry, unsigned int nr) { if (pte_present(entry)) entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED)); - if (mm_has_pgste(mm)) - ptep_set_pte_at(mm, addr, ptep, entry); - else - set_pte(ptep, entry); + if (mm_has_pgste(mm)) { + for (;;) { + ptep_set_pte_at(mm, addr, ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + addr += PAGE_SIZE; + } + } else { + for (;;) { + set_pte(ptep, entry); + if (--nr == 0) + break; + ptep++; + entry = __pte(pte_val(entry) + PAGE_SIZE); + } + } } +#define set_ptes set_ptes /* * Conversion functions: convert a page and protection to a page entry, From patchwork Mon Jul 10 20:43:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 118114 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp77348vqm; Mon, 10 Jul 2023 14:17:16 -0700 (PDT) X-Google-Smtp-Source: APBJJlGIcGHTIYpcU/sEVc+rbk4S18VeeeQVwoxePy47rf6gpvyWhZ3p7hkpH/ty35H0aydjMCFd X-Received: by 2002:a05:6a20:7485:b0:130:d5a:e413 with SMTP id p5-20020a056a20748500b001300d5ae413mr14250916pzd.6.1689023835760; Mon, 10 Jul 2023 14:17:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689023835; cv=none; d=google.com; s=arc-20160816; b=Y8yXrz/DCGkvG/2nNo7KYbCBFgCeJrrnKz1buM+v5xaUNac2BRhcLCKmH2/110tDYG 2jO9Q15N+u8NbL/Fl1EHHSP4+CZQcFSJRvTbheBYA+XtAJOjNNW0GI3ol1RWrCJIOljI zR75USvv+hJrNGAHLnK6Zi5wy6YKBvHXS4dHrhFtfcWrzHFGjC8DoGtmTjnPEcCw4UcN ctGkQFDjYmVN7e7f5yRFL0Q19nBEvOBTTSb917ClxkYRq2slAMD+MAwTeeshBcIGa3sP u44D1uvMSoGsyXf1pjC0zuvVmCb6fUAfI61eG/U9KSsJ9ePrIBNIlFqbZVfU3PnEaxJX ACVg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=ueXixPUpzkyHVw6Q1Uf0w0UoKSG6KS3y+zNsVFthAW0=; fh=mxiCX4DOOwADVcpIju7mDPynHZPIxYp9GthoVE/d7yA=; b=uNZZAfjn4gzoGw//yq5mPSQY0u8P5Wvtd9VwejAQNYSKvJS8izfyP2SMpi6HP0Q5if 
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , linux-sh@vger.kernel.org
Subject: [PATCH v5 24/38] sh: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:25 +0100
Message-Id: <20230710204339.3554919-25-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>
List-ID: X-Mailing-List:
linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1771069857778486620 X-GMAIL-MSGID: 1771069857778486620 Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_dcache_clean flag from being per-page to per-folio. Flush the entire folio containing the pages in flush_icache_pages() for ease of implementation. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Yoshinori Sato Cc: Rich Felker Cc: John Paul Adrian Glaubitz Cc: linux-sh@vger.kernel.org --- arch/sh/include/asm/cacheflush.h | 21 ++++++++----- arch/sh/include/asm/pgtable.h | 7 +++-- arch/sh/include/asm/pgtable_32.h | 5 ++- arch/sh/mm/cache-j2.c | 4 +-- arch/sh/mm/cache-sh4.c | 26 +++++++++++----- arch/sh/mm/cache-sh7705.c | 26 ++++++++++------ arch/sh/mm/cache.c | 52 ++++++++++++++++++-------------- arch/sh/mm/kmap.c | 3 +- 8 files changed, 89 insertions(+), 55 deletions(-) diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 481a664287e2..9fceef6f3e00 100644 --- a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -13,9 +13,9 @@ * - flush_cache_page(mm, vmaddr, pfn) flushes a single page * - flush_cache_range(vma, start, end) flushes a range of pages * - * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache + * - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache * - flush_icache_range(start, end) flushes(invalidates) a range for icache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache * - flush_cache_sigtramp(vaddr) flushes the signal trampoline */ extern void (*local_flush_cache_all)(void *args); @@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args); extern void (*local_flush_cache_dup_mm)(void *args); extern void (*local_flush_cache_page)(void *args); extern void (*local_flush_cache_range)(void *args); -extern void (*local_flush_dcache_page)(void *args); +extern void (*local_flush_dcache_folio)(void *args); extern void (*local_flush_icache_range)(void *args); -extern void (*local_flush_icache_page)(void *args); +extern void (*local_flush_icache_folio)(void *args); extern void (*local_flush_cache_sigtramp)(void *args); static inline void cache_noop(void *args) { } @@ -42,11 +42,18 @@ extern void flush_cache_page(struct vm_area_struct *vma, extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range -extern void flush_icache_page(struct vm_area_struct *vma, - struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 3ce30becf6df..729f5c6225fb 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -102,13 +102,16 @@ extern void __update_cache(struct vm_area_struct *vma, extern 
void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void -update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; __update_cache(vma, address, pte); __update_tlb(vma, address, pte); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..676f3d4ef6ce 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -307,14 +307,13 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define set_pte(pteptr, pteval) (*(pteptr) = pteval) #endif -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) */ #define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pfn_pte(pfn, prot) \ __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot)) #define pfn_pmd(pfn, prot) \ @@ -323,7 +322,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define pte_none(x) (!pte_val(x)) #define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) #define pmd_none(x) (!pmd_val(x)) #define pmd_present(x) (pmd_val(x)) diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c index f277862a11f5..9ac960214380 100644 --- a/arch/sh/mm/cache-j2.c +++ b/arch/sh/mm/cache-j2.c @@ -55,9 +55,9 @@ void __init j2_cache_init(void) local_flush_cache_dup_mm = j2_flush_both; local_flush_cache_page = j2_flush_both; local_flush_cache_range = j2_flush_both; - local_flush_dcache_page = j2_flush_dcache; + local_flush_dcache_folio = j2_flush_dcache; local_flush_icache_range = j2_flush_icache; - local_flush_icache_page = j2_flush_icache; + local_flush_icache_folio = j2_flush_icache; local_flush_cache_sigtramp = j2_flush_icache; pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base)); diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c index 72c2e1b46c08..862046f26981 100644 --- a/arch/sh/mm/cache-sh4.c +++ b/arch/sh/mm/cache-sh4.c @@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh4_flush_dcache_page(void *arg) +static void sh4_flush_dcache_folio(void *arg) { - struct page *page = arg; - unsigned long addr = (unsigned long)page_address(page); + struct folio *folio = arg; #ifndef CONFIG_SMP - struct address_space *mapping = page_mapping_file(page); + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); else #endif - flush_cache_one(CACHE_OC_ADDRESS_ARRAY | - (addr & shm_align_mask), page_to_phys(page)); + { + unsigned long pfn = folio_pfn(folio); + unsigned long addr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + flush_cache_one(CACHE_OC_ADDRESS_ARRAY | + (addr & shm_align_mask), + pfn * PAGE_SIZE); + addr += PAGE_SIZE; + pfn++; + } + } wmb(); } @@ -379,7 +389,7 @@ void __init sh4_cache_init(void) __raw_readl(CCN_PRR)); local_flush_icache_range = sh4_flush_icache_range; - local_flush_dcache_page = sh4_flush_dcache_page; + local_flush_dcache_folio = sh4_flush_dcache_folio; local_flush_cache_all = sh4_flush_cache_all; local_flush_cache_mm = sh4_flush_cache_mm; local_flush_cache_dup_mm = sh4_flush_cache_mm; diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c index 9b63a53a5e46..b509a407588f 100644 --- a/arch/sh/mm/cache-sh7705.c +++ b/arch/sh/mm/cache-sh7705.c @@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys) * Write back & invalidate the D-cache of the page. * (To avoid "alias" issues) */ -static void sh7705_flush_dcache_page(void *arg) +static void sh7705_flush_dcache_folio(void *arg) { - struct page *page = arg; - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = arg; + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); - else - __flush_dcache_page(__pa(page_address(page))); + clear_bit(PG_dcache_clean, &folio->flags); + else { + unsigned long pfn = folio_pfn(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_dcache_page((pfn + i) * PAGE_SIZE); + } } static void sh7705_flush_cache_all(void *args) @@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args) * Not entirely sure why this is necessary on SH3 with 32K cache but * without it we get occasional "Memory fault" when loading a program. 
*/ -static void sh7705_flush_icache_page(void *page) +static void sh7705_flush_icache_folio(void *arg) { - __flush_purge_region(page_address(page), PAGE_SIZE); + struct folio *folio = arg; + __flush_purge_region(folio_address(folio), folio_size(folio)); } void __init sh7705_cache_init(void) { local_flush_icache_range = sh7705_flush_icache_range; - local_flush_dcache_page = sh7705_flush_dcache_page; + local_flush_dcache_folio = sh7705_flush_dcache_folio; local_flush_cache_all = sh7705_flush_cache_all; local_flush_cache_mm = sh7705_flush_cache_all; local_flush_cache_dup_mm = sh7705_flush_cache_all; local_flush_cache_range = sh7705_flush_cache_all; local_flush_cache_page = sh7705_flush_cache_page; - local_flush_icache_page = sh7705_flush_icache_page; + local_flush_icache_folio = sh7705_flush_icache_folio; } diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c index 3aef78ceb820..9bcaa5619eab 100644 --- a/arch/sh/mm/cache.c +++ b/arch/sh/mm/cache.c @@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop; void (*local_flush_cache_dup_mm)(void *args) = cache_noop; void (*local_flush_cache_page)(void *args) = cache_noop; void (*local_flush_cache_range)(void *args) = cache_noop; -void (*local_flush_dcache_page)(void *args) = cache_noop; +void (*local_flush_dcache_folio)(void *args) = cache_noop; void (*local_flush_icache_range)(void *args) = cache_noop; -void (*local_flush_icache_page)(void *args) = cache_noop; +void (*local_flush_icache_folio)(void *args) = cache_noop; void (*local_flush_cache_sigtramp)(void *args) = cache_noop; void (*__flush_wback_region)(void *start, int size); @@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + struct folio *folio = page_folio(page); + + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(vto); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } if (vma->vm_flags & VM_EXEC) @@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + test_bit(PG_dcache_clean, &folio->flags)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(vfrom); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } } void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); - if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) && - test_bit(PG_dcache_clean, &from->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) && + test_bit(PG_dcache_clean, &src->flags)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(vfrom); @@ -136,27 +141,28 @@ EXPORT_SYMBOL(clear_user_highpage); void __update_cache(struct 
vm_area_struct *vma, unsigned long address, pte_t pte) { - struct page *page; unsigned long pfn = pte_pfn(pte); if (!boot_cpu_data.dcache.n_aliases) return; - page = pfn_to_page(pfn); if (pfn_valid(pfn)) { - int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags); if (dirty) - __flush_purge_region(page_address(page), PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } void __flush_anon_page(struct page *page, unsigned long vmaddr) { + struct folio *folio = page_folio(page); unsigned long addr = (unsigned long) page_address(page); if (pages_do_alias(addr, vmaddr)) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); @@ -164,7 +170,8 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr) /* __flush_purge_region((void *)kaddr, PAGE_SIZE); */ kunmap_coherent(kaddr); } else - __flush_purge_region((void *)addr, PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } @@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, } EXPORT_SYMBOL(flush_cache_range); -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - cacheop_on_each_cpu(local_flush_dcache_page, page, 1); + cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void flush_icache_range(unsigned long start, unsigned long end) { @@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(flush_icache_range); -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { - /* Nothing uses the VMA, so just pass the struct page along */ - cacheop_on_each_cpu(local_flush_icache_page, page, 1); + /* Nothing uses the VMA, so just pass the folio along */ + cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1); } void flush_cache_sigtramp(unsigned long address) diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c index 73fd7cc99430..fa50e8f6e7a9 100644 --- a/arch/sh/mm/kmap.c +++ b/arch/sh/mm/kmap.c @@ -27,10 +27,11 @@ void __init kmap_coherent_init(void) void *kmap_coherent(struct page *page, unsigned long addr) { + struct folio *folio = page_folio(page); enum fixed_addresses idx; unsigned long vaddr; - BUG_ON(!test_bit(PG_dcache_clean, &page->flags)); + BUG_ON(!test_bit(PG_dcache_clean, &folio->flags)); preempt_disable(); pagefault_disable(); From patchwork Mon Jul 10 20:43:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 118100 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp74243vqm; Mon, 10 Jul 2023 14:10:42 -0700 (PDT) X-Google-Smtp-Source: APBJJlFSxgqVFKbdJzDBGIKC7NjMpJeiP+71wkwiW4pkfjAGnlF1N/h3tCzrVqdhhHjdXtqesjnX X-Received: by 2002:a17:90b:390d:b0:262:f99b:a530 with SMTP id ob13-20020a17090b390d00b00262f99ba530mr14376887pjb.34.1689023442136; Mon, 10 Jul 2023 14:10:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689023442; cv=none; d=google.com; s=arc-20160816; 
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , "David S.
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v5 25/38] sparc32: Implement the new page table range API Date: Mon, 10 Jul 2023 21:43:26 +0100 Message-Id: <20230710204339.3554919-26-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230710204339.3554919-1-willy@infradead.org> References: <20230710204339.3554919-1-willy@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_BLOCKED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1771069444672251864 X-GMAIL-MSGID: 1771069444672251864 Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_32.h | 9 +++++++-- arch/sparc/include/asm/pgtable_32.h | 8 ++++---- arch/sparc/mm/init_32.c | 13 +++++++++++-- 3 files changed, 22 insertions(+), 8 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index adb6991d0455..8dba35d63328 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -16,6 +16,7 @@ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) #define flush_icache_page(vma, pg) do { } while (0) +#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ @@ -35,11 +36,15 @@ #define flush_page_for_dma(addr) \ sparc32_cachetlb_ops->page_for_dma(addr) -struct page; void sparc_flush_page_to_ram(struct page *page); +void sparc_flush_folio_to_ram(struct folio *folio); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) sparc_flush_page_to_ram(page) +#define flush_dcache_folio(folio) sparc_flush_folio_to_ram(folio) +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..315d316614ca 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -101,8 +101,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) srmmu_swap((unsigned long *)ptep, pte_val(pteval)); } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - static inline int srmmu_device_memory(unsigned long x) { return ((x & 0xF0000000) != 0); @@ -256,6 +254,7 @@ static inline pte_t pte_mkyoung(pte_t pte) return __pte(pte_val(pte) | SRMMU_REF); } +#define PFN_PTE_SHIFT (PAGE_SHIFT - 4) #define pfn_pte(pfn, prot) mk_pte(pfn_to_page(pfn), prot) static inline unsigned long pte_pfn(pte_t pte) @@ -268,7 +267,7 @@ static inline unsigned long pte_pfn(pte_t pte) */ return ~0UL; } - return (pte_val(pte) & SRMMU_PTE_PMASK) >> (PAGE_SHIFT-4); + return (pte_val(pte) & SRMMU_PTE_PMASK) >> PFN_PTE_SHIFT; } #define pte_page(pte) pfn_to_page(pte_pfn(pte)) @@ -318,6 +317,7 @@ void mmu_info(struct seq_file *m); #define FAULT_CODE_USER 0x4 #define update_mmu_cache(vma, address, ptep) do { } while (0) 
diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index adb6991d0455..8dba35d63328 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -16,6 +16,7 @@
 	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
 #define flush_icache_page(vma, pg)		do { } while (0)
+#define flush_icache_pages(vma, pg, nr)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do { \
@@ -35,11 +36,15 @@
 #define flush_page_for_dma(addr) \
 	sparc32_cachetlb_ops->page_for_dma(addr)
 
-struct page;
 void sparc_flush_page_to_ram(struct page *page);
+void sparc_flush_folio_to_ram(struct folio *folio);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)		sparc_flush_page_to_ram(page)
+#define flush_dcache_folio(folio)	sparc_flush_folio_to_ram(folio)
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index d4330e3c57a6..315d316614ca 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -101,8 +101,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	srmmu_swap((unsigned long *)ptep, pte_val(pteval));
 }
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 static inline int srmmu_device_memory(unsigned long x)
 {
 	return ((x & 0xF0000000) != 0);
@@ -256,6 +254,7 @@ static inline pte_t pte_mkyoung(pte_t pte)
 	return __pte(pte_val(pte) | SRMMU_REF);
 }
 
+#define PFN_PTE_SHIFT			(PAGE_SHIFT - 4)
 #define pfn_pte(pfn, prot)		mk_pte(pfn_to_page(pfn), prot)
 
 static inline unsigned long pte_pfn(pte_t pte)
@@ -268,7 +267,7 @@ static inline unsigned long pte_pfn(pte_t pte)
 		 */
 		return ~0UL;
 	}
-	return (pte_val(pte) & SRMMU_PTE_PMASK) >> (PAGE_SHIFT-4);
+	return (pte_val(pte) & SRMMU_PTE_PMASK) >> PFN_PTE_SHIFT;
 }
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
@@ -318,6 +317,7 @@ void mmu_info(struct seq_file *m);
 #define FAULT_CODE_USER     0x4
 
 #define update_mmu_cache(vma, address, ptep) do { } while (0)
+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do { } while (0)
 
 void srmmu_mapiorange(unsigned int bus, unsigned long xpa,
                       unsigned long xva, unsigned int len);
@@ -422,7 +422,7 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 ({									  \
 	int __changed = !pte_same(*(__ptep), __entry);			  \
 	if (__changed) {						  \
-		set_pte_at((__vma)->vm_mm, (__address), __ptep, __entry); \
+		set_pte(__ptep, __entry);				  \
 		flush_tlb_page(__vma, __address);			  \
 	}								  \
 	__changed;							  \
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index 9c0ea457bdf0..d96a14ffceeb 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page)
 {
 	unsigned long vaddr = (unsigned long)page_address(page);
 
-	if (vaddr)
-		__flush_page_to_ram(vaddr);
+	__flush_page_to_ram(vaddr);
 }
 EXPORT_SYMBOL(sparc_flush_page_to_ram);
 
+void sparc_flush_folio_to_ram(struct folio *folio)
+{
+	unsigned long vaddr = (unsigned long)folio_address(folio);
+	unsigned int i, nr = folio_nr_pages(folio);
+
+	for (i = 0; i < nr; i++)
+		__flush_page_to_ram(vaddr + i * PAGE_SIZE);
+}
+EXPORT_SYMBOL(sparc_flush_folio_to_ram);
+
 static const pgprot_t protection_map[16] = {
 	[VM_NONE]	= PAGE_NONE,
 	[VM_READ]	= PAGE_READONLY,
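A worked illustration of why PFN_PTE_SHIFT is PAGE_SHIFT - 4 here
(values hypothetical, not from the patch): an SRMMU PTE stores the
physical address shifted right by four, so the pfn field starts
PAGE_SHIFT - 4 bits up and the generic loop's increment of
1UL << PFN_PTE_SHIFT steps exactly one page:

	unsigned long pfn   = 0x12345;			/* page frame number */
	unsigned long paddr = pfn << PAGE_SHIFT;	/* physical address */
	unsigned long bits  = paddr >> 4;		/* what the PTE stores */

	/* bits == pfn << (PAGE_SHIFT - 4) == pfn << PFN_PTE_SHIFT, so
	 * adding 1UL << PFN_PTE_SHIFT to pte_val() moves to pfn + 1. */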
From patchwork Mon Jul 10 20:43:27 2023
X-Patchwork-Id: 118111
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, "David S. Miller",
 sparclinux@vger.kernel.org
Subject: [PATCH v5 26/38] sparc64: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:27 +0100
Message-Id: <20230710204339.3554919-27-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Convert the PG_dcache_dirty flag from being
per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: "David S. Miller"
Cc: sparclinux@vger.kernel.org
Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 24 ++++++-- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 116 insertions(+), 65 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 5563efa1a19f..c6c31631cdb0 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -927,8 +927,19 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } +} +#define set_ptes set_ptes #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -947,8 +958,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -963,7 +974,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_fault *, struct vm_area_struct *, + unsigned long addr, pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #ifdef 
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index e5964d1d8b37..f3969a3600db 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah;
 #endif
 extern unsigned long xcall_flush_dcache_page_spitfire;
 
-static inline void __local_flush_dcache_page(struct page *page)
+static inline void __local_flush_dcache_folio(struct folio *folio)
 {
+	unsigned int i, nr = folio_nr_pages(folio);
+
 #ifdef DCACHE_ALIASING_POSSIBLE
-	__flush_dcache_page(page_address(page),
+	for (i = 0; i < nr; i++)
+		__flush_dcache_page(folio_address(folio) + i * PAGE_SIZE,
 			    ((tlb_type == spitfire) &&
-			     page_mapping_file(page) != NULL));
+			     folio_flush_mapping(folio) != NULL));
 #else
-	if (page_mapping_file(page) != NULL &&
-	    tlb_type == spitfire)
-		__flush_icache_page(__pa(page_address(page)));
+	if (folio_flush_mapping(folio) != NULL &&
+	    tlb_type == spitfire) {
+		unsigned long pfn = folio_pfn(folio);
+
+		for (i = 0; i < nr; i++)
+			__flush_icache_page((pfn + i) * PAGE_SIZE);
+	}
 #endif
 }
 
-void smp_flush_dcache_page_impl(struct page *page, int cpu)
+void smp_flush_dcache_folio_impl(struct folio *folio, int cpu)
 {
 	int this_cpu;
 
@@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	this_cpu = get_cpu();
 
 	if (cpu == this_cpu) {
-		__local_flush_dcache_page(page);
+		__local_flush_dcache_folio(folio);
 	} else if (cpu_online(cpu)) {
-		void *pg_addr = page_address(page);
+		void *pg_addr = folio_address(folio);
 		u64 data0 = 0;
 
 		if (tlb_type == spitfire) {
 			data0 = ((u64)&xcall_flush_dcache_page_spitfire);
-			if (page_mapping_file(page) != NULL)
+			if (folio_flush_mapping(folio) != NULL)
 				data0 |= ((u64)1 << 32);
 		} else if (tlb_type == cheetah || tlb_type == cheetah_plus) {
 #ifdef DCACHE_ALIASING_POSSIBLE
@@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 #endif
 		}
 		if (data0) {
-			xcall_deliver(data0, __pa(pg_addr),
-				      (u64) pg_addr, cpumask_of(cpu));
+			unsigned int i, nr = folio_nr_pages(folio);
+
+			for (i = 0; i < nr; i++) {
+				xcall_deliver(data0, __pa(pg_addr),
+					      (u64) pg_addr, cpumask_of(cpu));
 #ifdef CONFIG_DEBUG_DCFLUSH
-			atomic_inc(&dcpage_flushes_xcall);
+				atomic_inc(&dcpage_flushes_xcall);
 #endif
+				pg_addr += PAGE_SIZE;
+			}
 		}
 	}
 
 	put_cpu();
 }
 
-void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
+void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio)
 {
 	void *pg_addr;
 	u64 data0;
@@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	atomic_inc(&dcpage_flushes);
 #endif
 	data0 = 0;
-	pg_addr = page_address(page);
+	pg_addr = folio_address(folio);
 	if (tlb_type == spitfire) {
 		data0 = ((u64)&xcall_flush_dcache_page_spitfire);
-		if (page_mapping_file(page) != NULL)
+		if (folio_flush_mapping(folio) != NULL)
 			data0 |= ((u64)1 << 32);
 	} else if (tlb_type == cheetah || tlb_type == cheetah_plus) {
 #ifdef DCACHE_ALIASING_POSSIBLE
@@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 #endif
 	}
 	if (data0) {
-		xcall_deliver(data0, __pa(pg_addr),
-			      (u64) pg_addr, cpu_online_mask);
+		unsigned int i, nr = folio_nr_pages(folio);
+
+		for (i = 0; i < nr; i++) {
+			xcall_deliver(data0, __pa(pg_addr),
+				      (u64) pg_addr, cpu_online_mask);
 #ifdef CONFIG_DEBUG_DCFLUSH
-	atomic_inc(&dcpage_flushes_xcall);
+			atomic_inc(&dcpage_flushes_xcall);
 #endif
+			pg_addr += PAGE_SIZE;
+		}
 	}
-	__local_flush_dcache_page(page);
+	__local_flush_dcache_folio(folio);
 
 	preempt_enable();
 }
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..c48d189aaffd 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0);
 #endif
 #endif
 
-inline void flush_dcache_page_impl(struct page *page)
+inline void flush_dcache_folio_impl(struct folio *folio)
 {
+	unsigned int i, nr = folio_nr_pages(folio);
+
 	BUG_ON(tlb_type == hypervisor);
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
 #endif
 
 #ifdef DCACHE_ALIASING_POSSIBLE
-	__flush_dcache_page(page_address(page),
-			    ((tlb_type == spitfire) &&
-			     page_mapping_file(page) != NULL));
+	for (i = 0; i < nr; i++)
+		__flush_dcache_page(folio_address(folio) + i * PAGE_SIZE,
+				    ((tlb_type == spitfire) &&
+				     folio_flush_mapping(folio) != NULL));
#else
-	if (page_mapping_file(page) != NULL &&
-	    tlb_type == spitfire)
-		__flush_icache_page(__pa(page_address(page)));
+	if (folio_flush_mapping(folio) != NULL &&
+	    tlb_type == spitfire) {
+		unsigned long pfn = folio_pfn(folio);
+
+		for (i = 0; i < nr; i++)
+			__flush_icache_page((pfn + i) * PAGE_SIZE);
+	}
 #endif
 }
 
@@ -218,10 +223,10 @@
 #define PG_dcache_cpu_mask	\
 	((1UL<<ilog2(roundup_pow_of_two(NR_CPUS)))-1UL)
 
-#define dcache_dirty_cpu(page) \
-	(((page)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask)
+#define dcache_dirty_cpu(folio) \
+	(((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask)
 
-static inline void set_dcache_dirty(struct page *page, int this_cpu)
+static inline void set_dcache_dirty(struct folio *folio, int this_cpu)
 {
 	unsigned long mask = this_cpu;
 	unsigned long non_cpu_bits;
@@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu)
 			     "bne,pn	%%xcc, 1b\n\t"
 			     " nop"
 			     : /* no outputs */
-			     : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags)
+			     : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags)
 			     : "g1", "g7");
 }
 
-static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu)
+static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu)
 {
 	unsigned long mask = (1UL << PG_dcache_dirty);
 
@@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu)
 			     " nop\n"
 			     "2:"
 			     : /* no outputs */
-			     : "r" (cpu), "r" (mask), "r" (&page->flags),
+			     : "r" (cpu), "r" (mask), "r" (&folio->flags),
 			       "i" (PG_dcache_cpu_mask),
 			       "i" (PG_dcache_cpu_shift)
 			     : "g1", "g7");
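The inline assembly above is a compare-and-swap loop, written in asm
because the CPU mask must fit 13-bit signed immediates. In plain C,
set_dcache_dirty() is roughly the following (sketch only, helper name
hypothetical):

	static void set_dcache_dirty_sketch(struct folio *folio, int this_cpu)
	{
		unsigned long mask = ((unsigned long)this_cpu << PG_dcache_cpu_shift) |
				     (1UL << PG_dcache_dirty);
		unsigned long non_cpu_bits = ~(PG_dcache_cpu_mask << PG_dcache_cpu_shift);
		unsigned long old, new;

		do {
			old = folio->flags;
			/* clear the old owning CPU, record the new one + dirty */
			new = (old & non_cpu_bits) | mask;
		} while (cmpxchg(&folio->flags, old, new) != old);
	}

Making the flag per-folio means one dirty bit and one owning CPU now
describe the whole folio, which is what lets the flush paths above
operate on all of its pages at once.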
@@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn)
 
 	page = pfn_to_page(pfn);
 	if (page) {
+		struct folio *folio = page_folio(page);
 		unsigned long pg_flags;
 
-		pg_flags = page->flags;
+		pg_flags = folio->flags;
 		if (pg_flags & (1UL << PG_dcache_dirty)) {
 			int cpu = ((pg_flags >> PG_dcache_cpu_shift) &
 				   PG_dcache_cpu_mask);
@@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn)
 			 * in the SMP case.
 			 */
 			if (cpu == this_cpu)
-				flush_dcache_page_impl(page);
+				flush_dcache_folio_impl(folio);
 			else
-				smp_flush_dcache_page_impl(page, cpu);
+				smp_flush_dcache_folio_impl(folio, cpu);
 
-			clear_dcache_dirty_cpu(page, cpu);
+			clear_dcache_dirty_cpu(folio, cpu);
 
 			put_cpu();
 		}
@@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 }
 #endif	/* CONFIG_HUGETLB_PAGE */
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	struct mm_struct *mm;
 	unsigned long flags;
 	bool is_huge_tsb;
 	pte_t pte = *ptep;
+	unsigned int i;
 
 	if (tlb_type != hypervisor) {
 		unsigned long pfn = pte_pfn(pte);
@@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
 		}
 	}
 #endif
-	if (!is_huge_tsb)
-		__update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT,
-					address, pte_val(pte));
+	if (!is_huge_tsb) {
+		for (i = 0; i < nr; i++) {
+			__update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT,
+						address, pte_val(pte));
+			address += PAGE_SIZE;
+			pte_val(pte) += PAGE_SIZE;
+		}
+	}
 
 	spin_unlock_irqrestore(&mm->context.lock, flags);
 }
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
+	unsigned long pfn = folio_pfn(folio);
 	struct address_space *mapping;
 	int this_cpu;
 
@@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page)
 	 * is merely the zero page.  The 'bigcore' testcase in GDB
 	 * causes this case to run millions of times.
 	 */
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;
 
 	this_cpu = get_cpu();
 
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 	if (mapping && !mapping_mapped(mapping)) {
-		int dirty = test_bit(PG_dcache_dirty, &page->flags);
+		bool dirty = test_bit(PG_dcache_dirty, &folio->flags);
 		if (dirty) {
-			int dirty_cpu = dcache_dirty_cpu(page);
+			int dirty_cpu = dcache_dirty_cpu(folio);
 
 			if (dirty_cpu == this_cpu)
 				goto out;
-			smp_flush_dcache_page_impl(page, dirty_cpu);
+			smp_flush_dcache_folio_impl(folio, dirty_cpu);
 		}
-		set_dcache_dirty(page, this_cpu);
+		set_dcache_dirty(folio, this_cpu);
 	} else {
 		/* We could delay the flush for the !page_mapping
 		 * case too.  But that case is for exec env/arg
 		 * pages and those are %99 certainly going to get
 		 * faulted into the tlb (and thus flushed) anyways.
 		 */
-		flush_dcache_page_impl(page);
+		flush_dcache_folio_impl(folio);
 	}
 
 out:
 	put_cpu();
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 void __kprobes flush_icache_range(unsigned long start, unsigned long end)
 {
@@ -2280,10 +2294,10 @@ void __init paging_init(void)
 	setup_page_offset();
 
 	/* These build time checkes make sure that the dcache_dirty_cpu()
-	 * page->flags usage will work.
+	 * folio->flags usage will work.
 	 *
 	 * When a page gets marked as dcache-dirty, we store the
-	 * cpu number starting at bit 32 in the page->flags.  Also,
+	 * cpu number starting at bit 32 in the folio->flags.  Also,
 	 * functions like clear_dcache_dirty_cpu use the cpu mask
 	 * in 13-bit signed-immediate instruction fields.
 	 */
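update_mmu_cache_range() preloads one TSB entry per page, stepping the
address and pte_val() in lockstep, so it pairs naturally with set_ptes().
The expected calling pattern from the core mm (illustrative; the fault
path supplies vmf, vma, addr and ptep):

	set_ptes(vma->vm_mm, addr, ptep, pte, nr);	  /* install nr PTEs */
	update_mmu_cache_range(vmf, vma, addr, ptep, nr); /* preload nr TSB entries */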
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 7ecf8556947a..0d41c94ec3ac 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
 		unsigned long paddr, pfn = pte_pfn(orig);
 		struct address_space *mapping;
 		struct page *page;
+		struct folio *folio;
 
 		if (!pfn_valid(pfn))
 			goto no_cache_flush;
@@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
 			goto no_cache_flush;
 
 		/* A real file page? */
+		folio = page_folio(page);
-		mapping = page_mapping_file(page);
+		mapping = folio_flush_mapping(folio);
 		if (!mapping)
 			goto no_cache_flush;
 
 		paddr = (unsigned long) page_address(page);
 		if ((paddr ^ vaddr) & (1 << 13))
-			flush_dcache_page_all(mm, page);
+			flush_dcache_folio_all(mm, folio);
 	}
 
 no_cache_flush:
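The (paddr ^ vaddr) & (1 << 13) test above is the classic VIPT colour
check: with PAGE_SHIFT == 13 on sparc64, bit 13 is the first cache-index
bit above the page offset, so a kernel alias and a user mapping can only
disagree in the D-cache when that bit differs. For example (addresses
hypothetical):

	unsigned long vaddr = 0x0000a000;	/* user mapping, bit 13 set */
	unsigned long paddr = 0x00004000;	/* kernel alias, bit 13 clear */

	if ((paddr ^ vaddr) & (1 << 13))
		/* different colour: the two views can cache the same data
		 * in different lines, so write back through one of them */
		flush_dcache_folio_all(mm, folio);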
From patchwork Mon Jul 10 20:43:28 2023
X-Patchwork-Id: 118110
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Richard Weinberger,
 Anton Ivanov, Johannes Berg, linux-um@lists.infradead.org
Subject: [PATCH v5 27/38] um: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:28 +0100
Message-Id: <20230710204339.3554919-28-willy@infradead.org>

Add PFN_PTE_SHIFT and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Cc: linux-um@lists.infradead.org
---
 arch/um/include/asm/pgtable.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index a70d1618eb35..44f6c76167d9 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -242,11 +242,7 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval)
 	if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *pteptr, pte_t pteval)
-{
-	set_pte(pteptr, pteval);
-}
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 
 #define __HAVE_ARCH_PTE_SAME
 static inline int pte_same(pte_t pte_a, pte_t pte_b)
@@ -290,6 +286,7 @@ struct mm_struct;
 extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
 
 #define update_mmu_cache(vma,address,ptep) do {} while (0)
+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do {} while (0)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
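With the pfn stored at PAGE_SHIFT (i.e. the PTE carries the physical
address), the generic increment of 1UL << PFN_PTE_SHIFT is exactly one
page, so um loses nothing by dropping its trivial set_pte_at(). A quick
equivalence sketch (illustrative, assuming the pfn field does not
overflow into the protection bits):

	pte_t a = pfn_pte(pfn + 1, prot);
	pte_t b = __pte(pte_val(pfn_pte(pfn, prot)) + (1UL << PFN_PTE_SHIFT));
	/* pte_val(a) == pte_val(b) when PFN_PTE_SHIFT == PAGE_SHIFT */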
From patchwork Mon Jul 10 20:43:29 2023
X-Patchwork-Id: 118106
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v5 28/38] x86: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:29 +0100
Message-Id: <20230710204339.3554919-29-willy@infradead.org>

Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Peter Anvin" --- arch/x86/include/asm/pgtable.h | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index c6242bc58a71..9818f13bfa09 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -185,6 +185,8 @@ static inline int pte_special(pte_t pte) static inline u64 protnone_mask(u64 val); +#define PFN_PTE_SHIFT PAGE_SHIFT + static inline unsigned long pte_pfn(pte_t pte) { phys_addr_t pfn = pte_val(pte); @@ -1020,13 +1022,6 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp) return res; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - set_pte(ptep, pte); -} - static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { @@ -1292,6 +1287,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { } +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) +{ +} static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { From patchwork Mon Jul 10 20:43:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 118087 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp63647vqm; Mon, 10 Jul 2023 13:47:59 -0700 (PDT) X-Google-Smtp-Source: APBJJlEhaS/tbGTCBQim+/TUYsODLqmYvenUdh4I/9ueLI6hIWtOiaex1UrOtK/PZn3nKbj8Zttj X-Received: by 2002:a17:906:20ce:b0:993:e691:6dd5 with SMTP id c14-20020a17090620ce00b00993e6916dd5mr8380496ejc.7.1689022079434; Mon, 10 Jul 2023 13:47:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689022079; cv=none; d=google.com; s=arc-20160816; b=zU1LrrA4X6/+jLPXLgh1MjTsOaVSK6/t+t+B9QKQLcrP78mMRt2HluANI+A8qiy+oB ldC8R3TzIw2BhtxNe5xZ7MuvJVJ5Drow5a60N+wwxZoxMp9qMQS4JRb5fOJPKFA8PMk5 sOpjqH2jh+n7EVlAU/bxFQPFlxVbwLwZUAUep/P53pCq2Ig3jxfIpla6RORT9GgJ4uwi rxFt0cwskumunrqU5QpC0yhNG7ORQ4RDTOapNlY9CN009skaUqLx50ZNXBbOvwDVcPdT oB3g+PG5HYhnz9/aRYDVP17+HQu5lRIR3K3ji59nGf7hRTifPCIaY1E2EfB338vHxkoJ Gb6w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=cPCd6cun2Qq+jg4xfMvT7i67deMTX4AKGcyftDmfVKA=; fh=ZzS10TrZ5p4CSs851F4akwFwEtRY4wdcT7SPmE9EAgY=; b=Hhvzjcv1+yYXMiaCx33Iyz+D/7fQTfTKvnj7PSpM/wmbmRFOCbMbu2TQ8yxZ+S6rJF iR6HKvIPZVuhERaxtkoLT9kxvwS6gP5to20NWbOjg66IMtYZcgPEFZZWf1T2maNuytuJ DEKlAin+Y59c8wyMD4g4LhoEO/j5ffFS7xJ0YkG3hYktNkVrgCoCYraazvBUpWH7t06h 7FUrHGyWECo7DIF0GGvUcUKZulmtikximMM27a01ZWcMRSp1PX5s/3mYkFr9Ndpncy0E POHGCzEm3Qsf5IF+rvVcy7g/03slQwguJp9rL97jWvi9CI0Y0sdcObigmThD3W0Pm1+G FW6g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=C9TSROXS; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
From patchwork Mon Jul 10 20:43:30 2023
X-Patchwork-Id: 118087
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Max Filippov,
 linux-xtensa@linux-xtensa.org
Subject: [PATCH v5 29/38] xtensa: Implement the new page table range API
Date: Mon, 10 Jul 2023 21:43:30 +0100
Message-Id: <20230710204339.3554919-30-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 18 +++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 63 insertions(+), 47 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)		do { } while (0)
+#define	flush_icache_pages(vma, page, nr)	do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
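The '#define flush_dcache_folio flush_dcache_folio' self-definition is
how an architecture tells the common header that it supplies its own
implementation; otherwise the core provides a fallback that simply loops
flush_dcache_page() over the folio. The common fallback is presumably a
per-page loop along these lines (sketch, not taken from this patch):

	#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE && !defined(flush_dcache_folio)
	void flush_dcache_folio(struct folio *folio)
	{
		long i, nr = folio_nr_pages(folio);

		for (i = 0; i < nr; i++)
			flush_dcache_page(folio_page(folio, i));
	}
	#endif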
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..ef79cb6c20dc 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -274,6 +274,7 @@ static inline pte_t pte_mkwrite(pte_t pte)
  * and a page entry and page directory to the page they refer to.
  */
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
 #define pte_same(a,b)		(pte_val(a) == pte_val(b))
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
@@ -301,15 +302,9 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
-{
-	update_pte(ptep, pteval);
-}
-
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
 static inline void
@@ -407,8 +402,11 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern void update_mmu_cache(struct vm_area_struct * vma,
-			     unsigned long address, pte_t *ptep);
+struct vm_fault;
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(NULL, vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;
 
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..7ec66a79f472 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
 
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
 */
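Each iteration above walks both the physical address and the folio's
file offset ('temp') forward one page, because the alias colour depends
on the virtual index. An illustration of the TLBTEMP alias-window trick
(addresses hypothetical):

	unsigned long phys = 0x12346000;
	unsigned long virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);

	/* virt has the same cache colour as any mapping of phys with that
	 * colour, so flushing through the temporary window at virt evicts
	 * the aliased lines without needing the original mapping. */
	__flush_invalidate_dcache_page_alias(virt, phys);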
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
 	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_local(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }
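The #else branch shows the general pattern for touching each page of a
possibly-highmem folio through a short-lived kernel mapping, extracted
here as a standalone sketch:

	unsigned int i, nr = folio_nr_pages(folio);

	for (i = 0; i < nr; i++) {
		/* kmap_local_folio() takes a byte offset into the folio */
		void *addr = kmap_local_folio(folio, i * PAGE_SIZE);

		/* ... operate on one PAGE_SIZE chunk at addr ... */
		kunmap_local(addr);
	}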
From patchwork Mon Jul 10 20:43:31 2023
X-Patchwork-Id: 118102
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Anshuman Khandual
Subject: [PATCH v5 30/38] mm: Remove page_mapping_file()
Date: Mon, 10 Jul 2023 21:43:31 +0100
Message-Id: <20230710204339.3554919-31-willy@infradead.org>
This function has no more users.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 794e4e55dc38..71dd79b4ae0a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -414,14 +414,6 @@ static inline struct address_space *page_file_mapping(struct page *page)
 	return folio_file_mapping(page_folio(page));
 }
 
-/*
- * For file cache pages, return the address_space, otherwise return NULL
- */
-static inline struct address_space *page_mapping_file(struct page *page)
-{
-	return folio_flush_mapping(page_folio(page));
-}
-
 /**
  * folio_inode - Get the host inode for this folio.
  * @folio: The folio.
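Its replacement in the flush paths, folio_flush_mapping(), keeps the
same contract: the address_space for file-cache folios, NULL otherwise.
Typical use, as seen in the sparc64 and xtensa conversions above
(sketch):

	struct address_space *mapping = folio_flush_mapping(folio);

	if (mapping && !mapping_mapped(mapping)) {
		/* file-backed but not mapped to user space: defer */
	} else {
		/* anonymous or user-mapped: flush now */
	}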
From patchwork Mon Jul 10 20:43:32 2023
X-Patchwork-Id: 118085
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v5 31/38] mm: Rationalise flush_icache_pages() and flush_icache_page()
Date: Mon, 10 Jul 2023 21:43:32 +0100
Message-Id: <20230710204339.3554919-32-willy@infradead.org>

Move the default (no-op) implementation of flush_icache_pages()
to <linux/cacheflush.h> from <asm-generic/cacheflush.h>.  Remove the
flush_icache_page() wrapper from each architecture into
<linux/cacheflush.h>.
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/alpha/include/asm/cacheflush.h     |  5 +----
 arch/arc/include/asm/cacheflush.h       |  9 ---------
 arch/arm/include/asm/cacheflush.h       |  7 -------
 arch/csky/abiv1/inc/abi/cacheflush.h    |  1 -
 arch/csky/abiv2/inc/abi/cacheflush.h    |  1 -
 arch/hexagon/include/asm/cacheflush.h   |  2 +-
 arch/loongarch/include/asm/cacheflush.h |  2 --
 arch/m68k/include/asm/cacheflush_mm.h   |  1 -
 arch/mips/include/asm/cacheflush.h      |  6 ------
 arch/nios2/include/asm/cacheflush.h     |  2 +-
 arch/parisc/include/asm/cacheflush.h    |  2 +-
 arch/sh/include/asm/cacheflush.h        |  2 +-
 arch/sparc/include/asm/cacheflush_32.h  |  2 --
 arch/sparc/include/asm/cacheflush_64.h  |  3 ---
 arch/xtensa/include/asm/cacheflush.h    |  4 ----
 include/asm-generic/cacheflush.h        | 12 ------------
 include/linux/cacheflush.h              |  9 +++++++++
 17 files changed, 14 insertions(+), 56 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 3956460e69e2..36a7e924c3b9 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -53,10 +53,6 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_user_page flush_icache_user_page
 #endif /* CONFIG_SMP */
 
-/* This is used only in __do_fault and do_swap_page. */
-#define flush_icache_page(vma, page) \
-	flush_icache_user_page((vma), (page), 0, 0)
-
 /*
  * Both implementations of flush_icache_user_page flush the entire
  * address space, so one call, no matter how many pages.
@@ -66,6 +62,7 @@ static inline void flush_icache_pages(struct vm_area_struct *vma,
 {
 	flush_icache_user_page(vma, page, 0, 0);
 }
+#define flush_icache_pages flush_icache_pages
 
 #include <asm-generic/cacheflush.h>
 
diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 04f65f588510..bd5b1a9a0544 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -18,15 +18,6 @@
 #include <linux/mm.h>
 #include <asm/shmparam.h>
 
-/*
- * Semantically we need this because icache doesn't snoop dcache/dma.
- * However ARC Cache flush requires paddr as well as vaddr, latter not available
- * in the flush_icache_page() API. So we no-op it but do the equivalent work
- * in update_mmu_cache()
- */
-#define flush_icache_page(vma, page)
-#define flush_icache_pages(vma, page, nr)
-
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);
 
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 841e268d2374..f6181f69577f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -321,13 +321,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
 
-/*
- * We don't appear to need to do anything here.  In fact, if we did, we'd
- * duplicate cache flushing elsewhere performed by flush_dcache_page().
- */
-#define flush_icache_page(vma,page)		do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 /*
  * flush_cache_vmap() is used when creating mappings (eg, via vmap,
  * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
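After this patch, only architectures with a real implementation define
flush_icache_pages (as a macro naming itself, like alpha above); every
other architecture picks up a no-op default. Per the commit message and
the nine lines added in the diffstat, the new default in
<linux/cacheflush.h> presumably looks like this (sketch):

	#ifndef flush_icache_pages
	static inline void flush_icache_pages(struct vm_area_struct *vma,
			struct page *page, unsigned int nr)
	{
	}
	#endif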
On non-VIPT diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h index 0d6cb65624c4..908d8b0bc4fd 100644 --- a/arch/csky/abiv1/inc/abi/cacheflush.h +++ b/arch/csky/abiv1/inc/abi/cacheflush.h @@ -45,7 +45,6 @@ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, u #define flush_cache_vmap(start, end) cache_wbinv_all() #define flush_cache_vunmap(start, end) cache_wbinv_all() -#define flush_icache_page(vma, page) do {} while (0); #define flush_icache_range(start, end) cache_wbinv_range(start, end) #define flush_icache_mm_range(mm, start, end) cache_wbinv_range(start, end) #define flush_icache_deferred(mm) do {} while (0); diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h index 9c728933a776..40be16907267 100644 --- a/arch/csky/abiv2/inc/abi/cacheflush.h +++ b/arch/csky/abiv2/inc/abi/cacheflush.h @@ -33,7 +33,6 @@ static inline void flush_dcache_page(struct page *page) #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) -#define flush_icache_page(vma, page) do { } while (0) #define flush_icache_range(start, end) cache_wbinv_range(start, end) diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h index dc3f500a5a01..bfff514a81c8 100644 --- a/arch/hexagon/include/asm/cacheflush.h +++ b/arch/hexagon/include/asm/cacheflush.h @@ -18,7 +18,7 @@ * - flush_cache_range(vma, start, end) flushes a range of pages * - flush_icache_range(start, end) flush a range of instructions * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) nr pages for icache * * Need to doublecheck which one is really needed for ptrace stuff to work. 
*/ diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h index 88a44da50a3b..80bd74106985 100644 --- a/arch/loongarch/include/asm/cacheflush.h +++ b/arch/loongarch/include/asm/cacheflush.h @@ -46,8 +46,6 @@ void local_flush_icache_range(unsigned long start, unsigned long end); #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) #define flush_cache_vmap(start, end) do { } while (0) #define flush_cache_vunmap(start, end) do { } while (0) -#define flush_icache_page(vma, page) do { } while (0) -#define flush_icache_pages(vma, page) do { } while (0) #define flush_icache_user_page(vma, page, addr, len) do { } while (0) #define flush_dcache_page(page) do { } while (0) #define flush_dcache_mmap_lock(mapping) do { } while (0) diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h index 88eb85e81ef6..ed12358c4783 100644 --- a/arch/m68k/include/asm/cacheflush_mm.h +++ b/arch/m68k/include/asm/cacheflush_mm.h @@ -261,7 +261,6 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr) #define flush_dcache_mmap_unlock(mapping) do { } while (0) #define flush_icache_pages(vma, page, nr) \ __flush_pages_to_ram(page_address(page), nr) -#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h index 0f389bc7cb90..f36c2519ed97 100644 --- a/arch/mips/include/asm/cacheflush.h +++ b/arch/mips/include/asm/cacheflush.h @@ -82,12 +82,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma, __flush_anon_page(page, vmaddr); } -static inline void flush_icache_pages(struct vm_area_struct *vma, - struct page *page, unsigned int nr) -{ -} -#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) - extern void (*flush_icache_range)(unsigned long start, unsigned long end); extern void (*local_flush_icache_range)(unsigned long start, unsigned long end); extern void (*__flush_icache_user_range)(unsigned long start, diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h index 8624ca83cffe..7c48c5213fb7 100644 --- a/arch/nios2/include/asm/cacheflush.h +++ b/arch/nios2/include/asm/cacheflush.h @@ -35,7 +35,7 @@ void flush_dcache_folio(struct folio *folio); extern void flush_icache_range(unsigned long start, unsigned long end); void flush_icache_pages(struct vm_area_struct *vma, struct page *page, unsigned int nr); -#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1); +#define flush_icache_pages flush_icache_pages #define flush_cache_vmap(start, end) flush_dcache_range(start, end) #define flush_cache_vunmap(start, end) flush_dcache_range(start, end) diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index b77c3e0c37d3..b4006f2a9705 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -60,7 +60,7 @@ static inline void flush_dcache_page(struct page *page) void flush_icache_pages(struct vm_area_struct *vma, struct page *page, unsigned int nr); -#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) +#define flush_icache_pages flush_icache_pages #define flush_icache_range(s,e) do { \ flush_kernel_dcache_range_asm(s,e); \ diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 9fceef6f3e00..878b6b551bd2 100644 --- 
a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -53,7 +53,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range void flush_icache_pages(struct vm_area_struct *vma, struct page *page, unsigned int nr); -#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) +#define flush_icache_pages flush_icache_pages extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index 8dba35d63328..21f6c918238b 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -15,8 +15,6 @@ #define flush_cache_page(vma,addr,pfn) \ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) -#define flush_icache_page(vma, pg) do { } while (0) -#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index a9a719f04d06..0e879004efff 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -53,9 +53,6 @@ static inline void flush_dcache_page(struct page *page) flush_dcache_folio(page_folio(page)); } -#define flush_icache_page(vma, pg) do { } while(0) -#define flush_icache_pages(vma, pg, nr) do { } while(0) - void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, unsigned long len, int write); diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h index 35153f6725e4..785a00ce83c1 100644 --- a/arch/xtensa/include/asm/cacheflush.h +++ b/arch/xtensa/include/asm/cacheflush.h @@ -160,10 +160,6 @@ void local_flush_cache_page(struct vm_area_struct *vma, __invalidate_icache_range(start,(end) - (start)); \ } while (0) -/* This is not required, see Documentation/core-api/cachetlb.rst */ -#define flush_icache_page(vma,page) do { } while (0) -#define flush_icache_pages(vma, page, nr) do { } while (0) - #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h index 09d51a680765..84ec53ccc450 100644 --- a/include/asm-generic/cacheflush.h +++ b/include/asm-generic/cacheflush.h @@ -77,18 +77,6 @@ static inline void flush_icache_range(unsigned long start, unsigned long end) #define flush_icache_user_range flush_icache_range #endif -#ifndef flush_icache_page -static inline void flush_icache_pages(struct vm_area_struct *vma, - struct page *page, unsigned int nr) -{ -} - -static inline void flush_icache_page(struct vm_area_struct *vma, - struct page *page) -{ -} -#endif - #ifndef flush_icache_user_page static inline void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h index 82136f3fcf54..55f297b2c23f 100644 --- a/include/linux/cacheflush.h +++ b/include/linux/cacheflush.h @@ -17,4 +17,13 @@ static inline void flush_dcache_folio(struct folio *folio) #define flush_dcache_folio flush_dcache_folio #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */ +#ifndef flush_icache_pages +static inline void flush_icache_pages(struct vm_area_struct *vma, + struct page *page, unsigned int nr) +{ +} +#endif + +#define flush_icache_page(vma, page) 
flush_icache_pages(vma, page, 1) + #endif /* _LINUX_CACHEFLUSH_H */

From patchwork Mon Jul 10 20:43:33 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118099
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Anshuman Khandual
Subject: [PATCH v5 32/38] mm: Tidy up set_ptes definition
Date: Mon, 10 Jul 2023 21:43:33 +0100
Message-Id: <20230710204339.3554919-33-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Now that all architectures are converted, we can remove the PFN_PTE_SHIFT ifdef and define set_pte_at() unconditionally.
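After this change the generic definition reads as one straightforward block. A condensed sketch of the resulting shape (abridged from the diff below; the kernel-doc comment and surrounding context are omitted here):

	#ifndef set_ptes
	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			set_pte(ptep, pte);
			if (--nr == 0)
				break;
			ptep++;
			/* Step the PFN forward by one page per entry. */
			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
		}
	}
	#endif

	/* No #ifdef PFN_PTE_SHIFT dance any more: defined exactly once. */
	#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)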
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Anshuman Khandual --- include/linux/pgtable.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 22f48f9997d5..e2a0bd5941be 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -181,7 +181,6 @@ static inline int pmd_young(pmd_t pmd) #endif #ifndef set_ptes -#ifdef PFN_PTE_SHIFT /** * set_ptes - Map consecutive pages to a contiguous range of addresses. * @mm: Address space to map the pages into. @@ -209,13 +208,8 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr, pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT)); } } -#ifndef set_pte_at -#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) -#endif #endif -#else #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) -#endif #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma,

From patchwork Mon Jul 10 20:43:34 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118092
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Anshuman Khandual
Subject: [PATCH v5 33/38] mm: Use flush_icache_pages() in do_set_pmd()
Date: Mon, 10 Jul 2023 21:43:34 +0100
Message-Id: <20230710204339.3554919-34-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Push the iteration over each page down to the architectures (many can flush the entire THP without iteration).
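The change itself is two lines; the before and after, extracted from the diff below:

	/* Before: HPAGE_PMD_NR calls, one per subpage of the THP. */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		flush_icache_page(vma, page + i);

	/* After: one call; an architecture whose flush primitive takes a
	 * range can now cover the whole THP in a single operation.
	 */
	flush_icache_pages(vma, page, HPAGE_PMD_NR);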
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Anshuman Khandual --- mm/memory.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index 1dada20e3995..ebb4138acdb2 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4258,7 +4258,6 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) bool write = vmf->flags & FAULT_FLAG_WRITE; unsigned long haddr = vmf->address & HPAGE_PMD_MASK; pmd_t entry; - int i; vm_fault_t ret = VM_FAULT_FALLBACK; if (!transhuge_vma_suitable(vma, haddr)) @@ -4291,8 +4290,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) if (unlikely(!pmd_none(*vmf->pmd))) goto out; - for (i = 0; i < HPAGE_PMD_NR; i++) - flush_icache_page(vma, page + i); + flush_icache_pages(vma, page, HPAGE_PMD_NR); entry = mk_huge_pmd(page, vma->vm_page_prot); if (write)

From patchwork Mon Jul 10 20:43:35 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118095
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v5 34/38] filemap: Add filemap_map_folio_range()
Date: Mon, 10 Jul 2023 21:43:35 +0100
Message-Id: <20230710204339.3554919-35-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

From: Yin Fengwei

filemap_map_folio_range() maps a partial or full folio. Compared to the original filemap_map_pages(), it updates the refcount once per folio instead of once per page, which gives a minor performance improvement for large folios. With a will-it-scale.page_fault3-like app (the file-write fault testing changed to read fault testing; upstreaming it to will-it-scale is proposed at [1]), this yields a 2% performance gain on a 48C/96T Cascade Lake test box with 96 processes running against xfs.
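The interesting bookkeeping is how the caller sizes each batch: a folio may begin before the fault-around window or extend beyond it, so the sub-range handed to filemap_map_folio_range() is clamped at both ends. A sketch of that arithmetic, condensed from the filemap_map_pages() hunk in the diff below:

	/* Index of the last page covered by this folio ... */
	end = folio->index + folio_nr_pages(folio) - 1;
	/* ... clamped to the fault-around window, counting from here. */
	nr_pages = min(end, end_pgoff) - xas.xa_index + 1;

	ret |= filemap_map_folio_range(vmf, folio,
			xas.xa_index - folio->index,	/* offset within folio */
			addr, nr_pages);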
[1]: https://github.com/antonblanchard/will-it-scale/pull/37 Signed-off-by: Yin Fengwei Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 109 ++++++++++++++++++++++++++------------------------- 1 file changed, 55 insertions(+), 54 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 8040545954bc..bdc1e0b811bf 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2168,16 +2168,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start, } EXPORT_SYMBOL(filemap_get_folios); -static inline -bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max) -{ - if (!folio_test_large(folio) || folio_test_hugetlb(folio)) - return false; - if (index >= max) - return false; - return index < folio_next_index(folio) - 1; -} - /** * filemap_get_folios_contig - Get a batch of contiguous folios * @mapping: The address_space to search @@ -3436,10 +3426,10 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio, return false; } -static struct folio *next_uptodate_page(struct folio *folio, - struct address_space *mapping, - struct xa_state *xas, pgoff_t end_pgoff) +static struct folio *next_uptodate_folio(struct xa_state *xas, + struct address_space *mapping, pgoff_t end_pgoff) { + struct folio *folio = xas_next_entry(xas, end_pgoff); unsigned long max_idx; do { @@ -3477,20 +3467,51 @@ static struct folio *next_uptodate_page(struct folio *folio, return NULL; } -static inline struct folio *first_map_page(struct address_space *mapping, - struct xa_state *xas, - pgoff_t end_pgoff) +/* + * Map page range [start_page, start_page + nr_pages) of folio. + * start_page is gotten from start by folio_page(folio, start) + */ +static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf, + struct folio *folio, unsigned long start, + unsigned long addr, unsigned int nr_pages) { - return next_uptodate_page(xas_find(xas, end_pgoff), - mapping, xas, end_pgoff); -} + vm_fault_t ret = 0; + struct vm_area_struct *vma = vmf->vma; + struct file *file = vma->vm_file; + struct page *page = folio_page(folio, start); + unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss); + unsigned int ref_count = 0, count = 0; -static inline struct folio *next_map_page(struct address_space *mapping, - struct xa_state *xas, - pgoff_t end_pgoff) -{ - return next_uptodate_page(xas_next_entry(xas, end_pgoff), - mapping, xas, end_pgoff); + do { + if (PageHWPoison(page)) + continue; + + if (mmap_miss > 0) + mmap_miss--; + + /* + * NOTE: If there're PTE markers, we'll leave them to be + * handled in the specific fault path, and it'll prohibit the + * fault-around logic. 
+ */ + if (!pte_none(*vmf->pte)) + continue; + + if (vmf->address == addr) + ret = VM_FAULT_NOPAGE; + + ref_count++; + do_set_pte(vmf, page, addr); + update_mmu_cache(vma, addr, vmf->pte); + } while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages); + + /* Restore the vmf->pte */ + vmf->pte -= nr_pages; + + folio_ref_add(folio, ref_count); + WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss); + + return ret; } vm_fault_t filemap_map_pages(struct vm_fault *vmf, @@ -3503,12 +3524,11 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, unsigned long addr; XA_STATE(xas, &mapping->i_pages, start_pgoff); struct folio *folio; - struct page *page; - unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss); vm_fault_t ret = 0; + int nr_pages = 0; rcu_read_lock(); - folio = first_map_page(mapping, &xas, end_pgoff); + folio = next_uptodate_folio(&xas, mapping, end_pgoff); if (!folio) goto out; @@ -3525,17 +3545,13 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, goto out; } do { -again: - page = folio_file_page(folio, xas.xa_index); - if (PageHWPoison(page)) - goto unlock; - - if (mmap_miss > 0) - mmap_miss--; + unsigned long end; addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT; vmf->pte += xas.xa_index - last_pgoff; last_pgoff = xas.xa_index; + end = folio->index + folio_nr_pages(folio) - 1; + nr_pages = min(end, end_pgoff) - xas.xa_index + 1; /* * NOTE: If there're PTE markers, we'll leave them to be @@ -3545,32 +3561,17 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, if (!pte_none(ptep_get(vmf->pte))) goto unlock; - /* We're about to handle the fault */ - if (vmf->address == addr) - ret = VM_FAULT_NOPAGE; + ret |= filemap_map_folio_range(vmf, folio, + xas.xa_index - folio->index, addr, nr_pages); - do_set_pte(vmf, page, addr); - /* no need to invalidate: a not-present page won't be cached */ - update_mmu_cache(vma, addr, vmf->pte); - if (folio_more_pages(folio, xas.xa_index, end_pgoff)) { - xas.xa_index++; - folio_ref_inc(folio); - goto again; - } - folio_unlock(folio); - continue; unlock: - if (folio_more_pages(folio, xas.xa_index, end_pgoff)) { - xas.xa_index++; - goto again; - } folio_unlock(folio); folio_put(folio); - } while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL); + folio = next_uptodate_folio(&xas, mapping, end_pgoff); + } while (folio); pte_unmap_unlock(vmf->pte, vmf->ptl); out: rcu_read_unlock(); - WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss); return ret; } EXPORT_SYMBOL(filemap_map_pages);

From patchwork Mon Jul 10 20:43:36 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118086
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v5 35/38] rmap: add folio_add_file_rmap_range()
Date: Mon, 10 Jul 2023 21:43:36 +0100
Message-Id: <20230710204339.3554919-36-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>
From: Yin Fengwei

folio_add_file_rmap_range() adds a PTE mapping for a specific range of pages within a file folio. Compared to page_add_file_rmap(), it batches the __lruvec_stat updates for large folios.

Signed-off-by: Yin Fengwei Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/rmap.h | 2 ++ mm/rmap.c | 60 +++++++++++++++++++++++++++++++++----------- 2 files changed, 48 insertions(+), 14 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index b87d01660412..a3825ce81102 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *, unsigned long address); void page_add_file_rmap(struct page *, struct vm_area_struct *, bool compound); +void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr, + struct vm_area_struct *, bool compound); void page_remove_rmap(struct page *, struct vm_area_struct *, bool compound); diff --git a/mm/rmap.c b/mm/rmap.c index 2668f5ea3534..642b3ad8bb1a 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1306,31 +1306,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma, } /** - * page_add_file_rmap - add pte mapping to a file page - * @page: the page to add the mapping to + * folio_add_file_rmap_range - add pte mapping to page range of a folio + * @folio: The folio to add the mapping to + * @page: The first page to add + * @nr_pages: The number of pages which will be mapped * @vma: the vm area in which the mapping is added * @compound: charge the page as compound or small page * + * The page range of folio is defined by [first_page, first_page + nr_pages) + * * The caller needs to hold the pte lock. */ -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma, - bool compound) +void folio_add_file_rmap_range(struct folio *folio, struct page *page, + unsigned int nr_pages, struct vm_area_struct *vma, + bool compound) { - struct folio *folio = page_folio(page); atomic_t *mapped = &folio->_nr_pages_mapped; - int nr = 0, nr_pmdmapped = 0; - bool first; + unsigned int nr_pmdmapped = 0, first; + int nr = 0; - VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page); + VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio); /* Is page being mapped by PTE? Is this its first map to be added?
*/ if (likely(!compound)) { - first = atomic_inc_and_test(&page->_mapcount); - nr = first; - if (first && folio_test_large(folio)) { - nr = atomic_inc_return_relaxed(mapped); - nr = (nr < COMPOUND_MAPPED); - } + do { + first = atomic_inc_and_test(&page->_mapcount); + if (first && folio_test_large(folio)) { + first = atomic_inc_return_relaxed(mapped); + first = (first < COMPOUND_MAPPED); + } + + if (first) + nr++; + } while (page++, --nr_pages > 0); } else if (folio_test_pmd_mappable(folio)) { /* That test is redundant: it's for safety or to optimize out */ @@ -1359,6 +1367,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma, mlock_vma_folio(folio, vma, compound); } +/** + * page_add_file_rmap - add pte mapping to a file page + * @page: the page to add the mapping to + * @vma: the vm area in which the mapping is added + * @compound: charge the page as compound or small page + * + * The caller needs to hold the pte lock. + */ +void page_add_file_rmap(struct page *page, struct vm_area_struct *vma, + bool compound) +{ + struct folio *folio = page_folio(page); + unsigned int nr_pages; + + VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page); + + if (likely(!compound)) + nr_pages = 1; + else + nr_pages = folio_nr_pages(folio); + + folio_add_file_rmap_range(folio, page, nr_pages, vma, compound); +} + /** * page_remove_rmap - take down pte mapping from a page * @page: page to remove mapping from

From patchwork Mon Jul 10 20:43:37 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118094
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v5 36/38] mm: Convert do_set_pte() to set_pte_range()
Date: Mon, 10 Jul 2023 21:43:37 +0100
Message-Id: <20230710204339.3554919-37-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

From: Yin Fengwei

set_pte_range() sets up the page table entries for a specific range of pages. It takes advantage of the batched rmap update for large folios, and it now takes care of calling update_mmu_cache_range() itself.
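At the call sites the conversion is mechanical; the contrast below is condensed from the mm/filemap.c hunk in the diff:

	/* Before: one page at a time, with the caller maintaining the
	 * MMU cache itself.
	 */
	do_set_pte(vmf, page, addr);
	update_mmu_cache(vma, addr, vmf->pte);

	/* After: a range (here still one page); set_pte_range() batches
	 * the counter and rmap updates and calls update_mmu_cache_range()
	 * internally.
	 */
	set_pte_range(vmf, folio, page, 1, addr);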
Signed-off-by: Yin Fengwei Signed-off-by: Matthew Wilcox (Oracle) --- Documentation/filesystems/locking.rst | 2 +- include/linux/mm.h | 3 ++- mm/filemap.c | 3 +-- mm/memory.c | 37 +++++++++++++++++---------- 4 files changed, 28 insertions(+), 17 deletions(-) diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst index ed148919e11a..211a03053992 100644 --- a/Documentation/filesystems/locking.rst +++ b/Documentation/filesystems/locking.rst @@ -661,7 +661,7 @@ locked. The VM will unlock the page. Filesystem should find and map pages associated with offsets from "start_pgoff" till "end_pgoff". ->map_pages() is called with the RCU lock held and must not block. If it's not possible to reach a page without blocking, -filesystem should skip it. Filesystem should use do_set_pte() to setup +filesystem should skip it. Filesystem should use set_pte_range() to setup page table entry. Pointer to entry associated with the page is passed in "pte" field in vm_fault structure. Pointers to entries for other offsets should be calculated relative to "pte". diff --git a/include/linux/mm.h b/include/linux/mm.h index 9687b48dfb1b..525a7e16850a 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1314,7 +1314,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma) } vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page); -void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr); +void set_pte_range(struct vm_fault *vmf, struct folio *folio, + struct page *page, unsigned int nr, unsigned long addr); vm_fault_t finish_fault(struct vm_fault *vmf); vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); diff --git a/mm/filemap.c b/mm/filemap.c index bdc1e0b811bf..c06e9d331416 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3501,8 +3501,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf, ret = VM_FAULT_NOPAGE; ref_count++; - do_set_pte(vmf, page, addr); - update_mmu_cache(vma, addr, vmf->pte); + set_pte_range(vmf, folio, page, 1, addr); } while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages); /* Restore the vmf->pte */ diff --git a/mm/memory.c b/mm/memory.c index ebb4138acdb2..e712e5fda56e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4323,15 +4323,24 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) } #endif -void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr) +/** + * set_pte_range - Set a range of PTEs to point to pages in a folio. + * @vmf: Fault decription. + * @folio: The folio that contains @page. + * @page: The first page to create a PTE for. + * @nr: The number of PTEs to create. + * @addr: The first address to create a PTE for. 
+ */ +void set_pte_range(struct vm_fault *vmf, struct folio *folio, + struct page *page, unsigned int nr, unsigned long addr) { struct vm_area_struct *vma = vmf->vma; bool uffd_wp = vmf_orig_pte_uffd_wp(vmf); bool write = vmf->flags & FAULT_FLAG_WRITE; - bool prefault = vmf->address != addr; + bool prefault = in_range(vmf->address, addr, nr * PAGE_SIZE); pte_t entry; - flush_icache_page(vma, page); + flush_icache_pages(vma, page, nr); entry = mk_pte(page, vma->vm_page_prot); if (prefault && arch_wants_old_prefaulted_pte()) @@ -4345,14 +4354,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr) entry = pte_mkuffd_wp(entry); /* copy-on-write page */ if (write && !(vma->vm_flags & VM_SHARED)) { - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, addr); - lru_cache_add_inactive_or_unevictable(page, vma); + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr); + VM_BUG_ON_FOLIO(nr != 1, folio); + folio_add_new_anon_rmap(folio, vma, addr); + folio_add_lru_vma(folio, vma); } else { - inc_mm_counter(vma->vm_mm, mm_counter_file(page)); - page_add_file_rmap(page, vma, false); + add_mm_counter(vma->vm_mm, mm_counter_file(page), nr); + folio_add_file_rmap_range(folio, page, nr, vma, false); } - set_pte_at(vma->vm_mm, addr, vmf->pte, entry); + set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr); + + /* no need to invalidate: a not-present page won't be cached */ + update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr); } static bool vmf_pte_changed(struct vm_fault *vmf) @@ -4420,11 +4433,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf) /* Re-check under ptl */ if (likely(!vmf_pte_changed(vmf))) { - do_set_pte(vmf, page, vmf->address); - - /* no need to invalidate: a not-present page won't be cached */ - update_mmu_cache(vma, vmf->address, vmf->pte); + struct folio *folio = page_folio(page); + set_pte_range(vmf, folio, page, 1, vmf->address); ret = 0; } else { update_mmu_tlb(vma, vmf->address, vmf->pte);

From patchwork Mon Jul 10 20:43:38 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118109
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v5 37/38] filemap: Batch PTE mappings
Date: Mon, 10 Jul 2023 21:43:38 +0100
Message-Id: <20230710204339.3554919-38-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of once per page.
This batches the updates to mm counters and the rmap. With a will-it-scale.page_fault3-like app (the file-write fault testing changed to read fault testing; upstreaming it to will-it-scale is proposed at [1]), this gives a 15% performance gain on a 48C/96T Cascade Lake test box with 96 processes running against xfs.

Perf call-graph data collected before the change:

    18.73%--page_add_file_rmap
            |
            --11.60%--__mod_lruvec_page_state
                      |
                      |--7.40%--__mod_memcg_lruvec_state
                      |         |
                      |         --5.58%--cgroup_rstat_updated
                      |
                      --2.53%--__mod_lruvec_state
                                |
                                --1.48%--__mod_node_page_state

and after the change:

    9.93%--page_add_file_rmap_range
           |
           --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |         |
                    |         --1.57%--cgroup_rstat_updated
                    |
                    --0.61%--__mod_lruvec_state
                              |
                              --0.54%--__mod_node_page_state

The running time of __mod_lruvec_page_state() is reduced by about 9%.

[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 43 +++++++++++++++++++++++++++++-------------- 1 file changed, 29 insertions(+), 14 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index c06e9d331416..014b73eb96a1 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3480,11 +3480,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf, struct file *file = vma->vm_file; struct page *page = folio_page(folio, start); unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss); - unsigned int ref_count = 0, count = 0; + unsigned int count = 0; + pte_t *old_ptep = vmf->pte; do { - if (PageHWPoison(page)) - continue; + if (PageHWPoison(page + count)) + goto skip; if (mmap_miss > 0) mmap_miss--; @@ -3494,20 +3495,34 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf, * handled in the specific fault path, and it'll prohibit the * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
-
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+			if (in_range(vmf->address, addr, count * PAGE_SIZE))
+				ret = VM_FAULT_NOPAGE;
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
+			ret = VM_FAULT_NOPAGE;
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
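For readers outside the kernel tree, the loop shape above is easy to
demonstrate in isolation. Below is a minimal, self-contained userspace
C sketch of the same accumulate-and-flush pattern; skip_page() and
map_range() are hypothetical stand-ins for the PageHWPoison()/pte_none()
checks and for set_pte_range() + folio_ref_add(), so it illustrates the
control flow only, not the kernel API.

/*
 * Minimal userspace sketch of the batching pattern used above.
 * map_range() is a hypothetical stand-in for set_pte_range() +
 * folio_ref_add(); skip_page() stands in for the per-page checks.
 * The point is the loop shape: extend a run while pages qualify,
 * flush the run once per break, then flush the trailing run.
 */
#include <stdbool.h>
#include <stdio.h>

static bool skip_page(unsigned int idx)
{
	/* Stand-in predicate: pretend every 5th page must be skipped. */
	return idx % 5 == 4;
}

static void map_range(unsigned int first, unsigned int count)
{
	/* One call per contiguous run, instead of one call per page. */
	printf("map pages [%u, %u)\n", first, first + count);
}

static void map_folio_batched(unsigned int nr_pages)
{
	unsigned int idx = 0, count = 0;

	do {
		if (skip_page(idx + count))
			goto skip;
		count++;
		continue;
skip:
		if (count)
			map_range(idx, count);
		count++;	/* account for the skipped page */
		idx += count;
		count = 0;
	} while (--nr_pages > 0);

	if (count)		/* flush the trailing run */
		map_range(idx, count);
}

int main(void)
{
	map_folio_batched(12);	/* prints [0, 4), [5, 9), [10, 12) */
	return 0;
}

Running this prints the three runs [0, 4), [5, 9) and [10, 12): each
skipped page flushes the run accumulated so far, and the final
if (count) after the loop flushes the trailing run, the same role the
tail block plays in the patch.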
From patchwork Mon Jul 10 20:43:39 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 38/38] mm: Call update_mmu_cache_range() in more page
 fault handling paths
Date: Mon, 10 Jul 2023 21:43:39 +0100
Message-Id: <20230710204339.3554919-39-willy@infradead.org>
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Pass the vm_fault to the architecture to help it make smarter decisions
about which PTEs to insert into the TLB.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memory.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e712e5fda56e..8dc54e412269 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2867,7 +2867,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 
 		entry = pte_mkyoung(vmf->orig_pte);
 		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache(vma, addr, vmf->pte);
+			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
 	}
 
 	/*
@@ -3045,7 +3045,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	entry = pte_mkyoung(vmf->orig_pte);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	count_vm_event(PGREUSE);
 }
@@ -3169,7 +3169,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		 */
 		BUG_ON(unshare && pte_write(entry));
 		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 		if (old_folio) {
 			/*
 			 * Only after switching the pte to the new page may
@@ -4039,7 +4039,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4163,7 +4163,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4837,7 +4837,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (writable)
 		pte = pte_mkwrite(pte);
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
@@ -4986,7 +4986,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	entry = pte_mkyoung(entry);
 	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte,
 				entry, vmf->flags & FAULT_FLAG_WRITE)) {
-		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vmf->vma, vmf->address,
+				vmf->pte, 1);
 	} else {
 		/* Skip spurious TLB flush for retried page fault */
 		if (vmf->flags & FAULT_FLAG_TRIED)
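The extra vm_fault argument and the trailing PTE count let an
architecture preload a batch of TLB entries from a single fault. As a
hedged sketch only — arch_preload_tlb_entry() is an invented per-PTE
primitive for illustration, not an interface defined by this series — a
fallback for an architecture with a single-entry preload hook could
look like this:

/*
 * Sketch of a possible fallback, assuming a hypothetical per-PTE
 * primitive arch_preload_tlb_entry(). Architectures that can preload
 * several entries at once would consult vmf and nr directly instead
 * of iterating entry by entry.
 */
static inline void update_mmu_cache_range(struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long addr,
		pte_t *ptep, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		arch_preload_tlb_entry(vma, addr + i * PAGE_SIZE, ptep + i);
}

On an architecture where the old single-PTE hook was a no-op, the range
version can presumably be a no-op as well, so callers like the paths
converted above can pass nr == 1 without penalty.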