From patchwork Wed Mar 15 05:14:18 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 69950
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v4 10/36] csky: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:18 +0000
Message-Id: <20230315051444.3229621-11-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
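For context: defining PFN_PTE_SHIFT is what lets csky drop its private
set_pte_at() (removed from pgtable.h below) and fall back to the generic
set_ptes() added elsewhere in this series. A minimal sketch of that
generic loop, modelled on the series' common code in
include/linux/pgtable.h (paraphrased, so treat it as illustrative rather
than the exact fallback):

	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			set_pte(ptep, pte);	/* arch primitive; csky's keeps its smp_mb() */
			if (--nr == 0)
				break;
			ptep++;
			/* step the pfn field of the pte forward by one page */
			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
		}
	}
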
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Cc: linux-csky@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 32 ++++++++++++++--------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      |  8 ++++---
 5 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index fb91b069dc69..ba43f6c26b4f 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -14,43 +14,49 @@
 
 #define PG_dcache_clean		PG_arch_1
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;
 
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 
-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else {
 		dcache_wbinv_all();
 		if (mapping)
 			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;
 
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
 		dcache_wbinv_all();
 
-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
 		if (vma->vm_flags & VM_EXEC)
 			icache_inv_all();
 	}
 }
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio
 
 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 39c51399dd81..622e5b1b3f8a 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -6,30 +6,30 @@
 #include <asm/cache.h>
 #include <asm/tlbflush.h>
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;
 
-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn) || is_zero_pfn(pfn))
 		return;
 
-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));
 
-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;
 
-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+								i * PAGE_SIZE);
 
-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }
 
 void flush_icache_deferred(struct mm_struct *mm)
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@
 
 #define PG_dcache_clean		PG_arch_1
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..8cd27104f408 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -28,6 +28,7 @@
 #define pgd_ERROR(e) \
 	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pmd_pfn(pmd)	(pmd_phys(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd)	(pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
 #define pte_clear(mm, addr, ptep)	set_pte((ptep), \
@@ -90,7 +91,6 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +263,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
 	remap_pfn_range(vma, vaddr, pfn, size, prot)
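
As a usage sketch (a hypothetical caller, not part of this patch): the
point of the range API is that core mm can establish nr consecutive ptes
of a folio and give the architecture one cache-maintenance call, instead
of nr per-page calls:

	/*
	 * Hypothetical core-mm caller, for illustration only; the names
	 * and surrounding context are assumed, not taken from this patch.
	 */
	set_ptes(vma->vm_mm, addr, ptep, mk_pte(page, vma->vm_page_prot), nr);
	update_mmu_cache_range(vma, addr, ptep, nr);

On csky, update_mmu_cache_range() then walks every page of the folio once
(the kmap_local_folio() loop above) rather than being entered once per pte.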