From patchwork Mon Jul 10 20:43:39 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 118113
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 38/38] mm: Call update_mmu_cache_range() in more page fault handling paths
Date: Mon, 10 Jul 2023 21:43:39 +0100
Message-Id: <20230710204339.3554919-39-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230710204339.3554919-1-willy@infradead.org>
References: <20230710204339.3554919-1-willy@infradead.org>

Pass the vm_fault to the architecture to help it make smarter decisions
about which PTEs to insert into the TLB.
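For reference, the callers changed below rely on the update_mmu_cache_range()
hook introduced earlier in this series. The following is only a sketch of the
shape of that interface as these callers use it; the no-op fallback and the
update_mmu_cache() compatibility wrapper are assumptions written for
illustration, not text copied from the series:

/*
 * Sketch only: the hook shape assumed by the callers below.  The real
 * definition added earlier in this series may differ in detail.
 */
#ifndef update_mmu_cache_range
static inline void update_mmu_cache_range(struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long addr,
		pte_t *ptep, unsigned int nr)
{
	/* Architectures with no MMU cache to prime do nothing. */
}
#endif

/* The old single-PTE call becomes a one-entry range with no fault context. */
#define update_mmu_cache(vma, addr, ptep) \
	update_mmu_cache_range(NULL, vma, addr, ptep, 1)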
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memory.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e712e5fda56e..8dc54e412269 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2867,7 +2867,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 
 		entry = pte_mkyoung(vmf->orig_pte);
 		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache(vma, addr, vmf->pte);
+			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
 	}
 
 	/*
@@ -3045,7 +3045,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	entry = pte_mkyoung(vmf->orig_pte);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	count_vm_event(PGREUSE);
 }
@@ -3169,7 +3169,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		 */
 		BUG_ON(unshare && pte_write(entry));
 		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 		if (old_folio) {
 			/*
 			 * Only after switching the pte to the new page may
@@ -4039,7 +4039,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4163,7 +4163,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4837,7 +4837,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (writable)
 		pte = pte_mkwrite(pte);
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
@@ -4986,7 +4986,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	entry = pte_mkyoung(entry);
 	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				vmf->flags & FAULT_FLAG_WRITE)) {
-		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vmf->vma, vmf->address,
+				vmf->pte, 1);
 	} else {
 		/* Skip spurious TLB flush for retried page fault */
 		if (vmf->flags & FAULT_FLAG_TRIED)
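To illustrate why the fault information is useful, here is a purely
hypothetical architecture-side implementation of the hook. The policy
(preloading the whole range only on write faults) and the helper
arch_preload_tlb_entry() are invented for this sketch and do not come from
any real architecture:

/*
 * Hypothetical example only - not taken from any real architecture.
 * With the vm_fault available, the architecture can tell what kind of
 * fault installed the PTEs and tune how aggressively it preloads the
 * nr entries starting at ptep into its TLB or MMU cache.
 */
static void example_update_mmu_cache_range(struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long addr,
		pte_t *ptep, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++, ptep++, addr += PAGE_SIZE) {
		/*
		 * Example policy: for read faults, preload only the
		 * faulting entry; for write faults (or when there is
		 * no fault context at all), preload the whole range.
		 */
		if (vmf && !(vmf->flags & FAULT_FLAG_WRITE) &&
		    addr != vmf->address)
			continue;
		arch_preload_tlb_entry(vma, addr, ptep); /* hypothetical */
	}
}

Callers that have no fault context pass vmf == NULL (as the
update_mmu_cache() wrapper sketched above does), so any vmf-based policy
has to tolerate that case.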