From patchwork Tue Sep 26 00:52:51 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 144659
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Mike Rapoport, Matthew Wilcox, David Hildenbrand, Zi Yan, Kefeng Wang
Subject: [PATCH -next 6/9] mm: make wp_page_reuse() and finish_mkwrite_fault() take a folio
Date: Tue, 26 Sep 2023 08:52:51 +0800
Message-ID: <20230926005254.2861577-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>
References: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>

Make finish_mkwrite_fault() a static function, and convert wp_page_reuse()
and finish_mkwrite_fault() to take a folio, in preparation for converting
page_cpupid_xchg_last() to a folio interface.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  1 -
 mm/memory.c        | 37 ++++++++++++++++++++-----------------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index aa7fdda1b56c..9933f6345e66 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1335,7 +1335,6 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 5ab6e8d45a7d..119c40e4465e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3014,23 +3014,24 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
  * case, all we need to do here is to mark the page as writable and update
  * any related book-keeping.
  */
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = vmf->page;
 	pte_t entry;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
-	VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
+	if (folio) {
+		VM_BUG_ON(folio_test_anon(folio) &&
+			  !PageAnonExclusive(vmf->page));
 
-	/*
-	 * Clear the pages cpupid information as the existing
-	 * information potentially belongs to a now completely
-	 * unrelated process.
-	 */
-	if (page)
-		page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+		/*
+		 * Clear the pages cpupid information as the existing
+		 * information potentially belongs to a now completely
+		 * unrelated process.
+		 */
+		page_cpupid_xchg_last(vmf->page, (1 << LAST_CPUPID_SHIFT) - 1);
+	}
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
@@ -3223,6 +3224,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  *			  writeable once the page is prepared
  *
  * @vmf: structure describing the fault
+ * @folio: the folio of vmf->page
  *
  * This function handles all that is needed to finish a write page fault in a
  * shared mapping due to PTE being read-only once the mapped page is prepared.
@@ -3234,7 +3236,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  * Return: %0 on success, %VM_FAULT_NOPAGE when PTE got changed before
  * we acquired PTE lock.
  */
-vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
+static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf,
+				       struct folio *folio)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
@@ -3250,7 +3253,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, folio);
 	return 0;
 }
 
@@ -3275,9 +3278,9 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 		ret = vma->vm_ops->pfn_mkwrite(vmf);
 		if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
 			return ret;
-		return finish_mkwrite_fault(vmf);
+		return finish_mkwrite_fault(vmf, NULL);
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 
@@ -3305,14 +3308,14 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
 			folio_put(folio);
 			return tmp;
 		}
-		tmp = finish_mkwrite_fault(vmf);
+		tmp = finish_mkwrite_fault(vmf, folio);
 		if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
 			folio_unlock(folio);
 			folio_put(folio);
 			return tmp;
 		}
 	} else {
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		folio_lock(folio);
 	}
 	ret |= fault_dirty_shared_page(vmf);
@@ -3436,7 +3439,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			return 0;
 		}
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, folio);
 		return 0;
 	}
 copy:
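
For illustration only (not part of the patch): the hunks above make wp_page_reuse()
and finish_mkwrite_fault() accept a possibly-NULL folio, so page-backed callers such
as wp_page_shared() pass the folio and keep the cpupid reset, while wp_pfn_shared()
passes NULL and skips it. Below is a minimal, standalone userspace C sketch of that
optional-argument pattern; struct fault_ctx, reuse_mapping() and the -1 sentinel are
invented names for this sketch, not kernel APIs.

#include <stdio.h>

/* Minimal stand-ins for struct folio and struct vm_fault. */
struct folio { int last_cpupid; };
struct fault_ctx { struct folio *folio; };

/*
 * Analogue of wp_page_reuse(vmf, folio): the folio argument may be NULL
 * (e.g. for pfn mappings); per-folio bookkeeping only runs when the caller
 * really has a page-backed folio.
 */
static void reuse_mapping(struct fault_ctx *ctx, struct folio *folio)
{
	if (folio) {
		/* Reset the last-cpupid value, as the old one may belong to an
		 * unrelated process (mirrors page_cpupid_xchg_last()). */
		folio->last_cpupid = -1;
	}
	(void)ctx;	/* the real code would mark the PTE writable here */
}

int main(void)
{
	struct folio f = { .last_cpupid = 42 };
	struct fault_ctx page_backed = { .folio = &f };
	struct fault_ctx pfn_mapped  = { .folio = NULL };

	reuse_mapping(&page_backed, page_backed.folio);	/* resets cpupid */
	reuse_mapping(&pfn_mapped, NULL);		/* no folio, nothing to reset */

	printf("last_cpupid after reuse: %d\n", f.last_cpupid);
	return 0;
}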