From patchwork Wed Dec 20 05:18:53 2023
X-Patchwork-Submitter: Nanyong Sun
X-Patchwork-Id: 181450
From: Nanyong Sun
Subject: [PATCH v2 1/3] mm: HVO: introduce helper function to update and flush pgtable
Date: Wed, 20 Dec 2023 13:18:53 +0800
Message-ID: <20231220051855.47547-2-sunnanyong@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231220051855.47547-1-sunnanyong@huawei.com>
References: <20231220051855.47547-1-sunnanyong@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add pmd/pte update and TLB flush helper functions for updating the page
table. This refactoring patch is designed to let each architecture
implement its own special logic, in preparation for the arm64
architecture to follow the necessary break-before-make sequence when
updating page tables.
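Each helper introduced below is wrapped in an #ifndef guard, so an
architecture can take over simply by supplying its own definition and
defining the matching macro in a header that mm/hugetlb_vmemmap.c ends up
including (which header carries the override is an architecture choice
and an assumption here; the actual arm64 override is added by a later
patch in this series). A minimal sketch of the override pattern, where
arch_vmemmap_flush_range() is a hypothetical helper and not a real kernel
function:

/*
 * Illustrative architecture override (sketch only). Defining the macro
 * suppresses the generic #ifndef fallback in mm/hugetlb_vmemmap.c.
 */
#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
static inline void vmemmap_flush_tlb_range(unsigned long start,
                                           unsigned long end)
{
        arch_vmemmap_flush_range(start, end);   /* hypothetical helper */
}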
Signed-off-by: Nanyong Sun
Reviewed-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 55 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 87818ee7f01d..2187e5410a94 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -45,6 +45,37 @@ struct vmemmap_remap_walk {
 	unsigned long		flags;
 };
 
+#ifndef vmemmap_update_pmd
+static inline void vmemmap_update_pmd(unsigned long addr,
+				      pmd_t *pmdp, pte_t *ptep)
+{
+	pmd_populate_kernel(&init_mm, pmdp, ptep);
+}
+#endif
+
+#ifndef vmemmap_update_pte
+static inline void vmemmap_update_pte(unsigned long addr,
+				      pte_t *ptep, pte_t pte)
+{
+	set_pte_at(&init_mm, addr, ptep, pte);
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_all
+static inline void vmemmap_flush_tlb_all(void)
+{
+	flush_tlb_all();
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_range
+static inline void vmemmap_flush_tlb_range(unsigned long start,
+					   unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#endif
+
 static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
 {
 	pmd_t __pmd;
@@ -87,9 +118,9 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
 
 		/* Make pte visible before pmd. See comment in pmd_install(). */
 		smp_wmb();
-		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		vmemmap_update_pmd(start, pmd, pgtable);
 		if (flush)
-			flush_tlb_kernel_range(start, start + PMD_SIZE);
+			vmemmap_flush_tlb_range(start, start + PMD_SIZE);
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
@@ -217,7 +248,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 	} while (pgd++, addr = next, addr != end);
 
 	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
-		flush_tlb_kernel_range(start, end);
+		vmemmap_flush_tlb_range(start, end);
 
 	return 0;
 }
@@ -263,15 +294,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 
 		/*
 		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * vmemmap_remap_free() become visible before the
+		 * vmemmap_update_pte() write.
 		 */
 		smp_wmb();
 	}
 
 	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	vmemmap_update_pte(addr, pte, entry);
 }
 
 /*
@@ -310,10 +341,10 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before the vmemmap_update_pte() write.
 	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+	vmemmap_update_pte(addr, pte, mk_pte(page, pgprot));
 }
 
 /**
@@ -576,7 +607,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	}
 
 	if (restored)
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 	if (!ret)
 		ret = restored;
 	return ret;
@@ -744,7 +775,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 			break;
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 
 	list_for_each_entry(folio, folio_list, lru) {
 		int ret = __hugetlb_vmemmap_optimize_folio(h, folio,
@@ -760,7 +791,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		 * allowing more vmemmap remaps to occur.
 		 */
 		if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
-			flush_tlb_all();
+			vmemmap_flush_tlb_all();
 			free_vmemmap_page_list(&vmemmap_pages);
 			INIT_LIST_HEAD(&vmemmap_pages);
 			__hugetlb_vmemmap_optimize_folio(h, folio,
@@ -769,7 +800,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		}
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 	free_vmemmap_page_list(&vmemmap_pages);
 }
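For the break-before-make requirement called out in the commit message,
an architecture override of vmemmap_update_pte() would, at minimum, have
to invalidate the old entry and flush its translation before installing
the new one. The sequence below is only a sketch built from generic
kernel primitives to illustrate that idea; it is not taken from the
arm64 patches later in this series, and a real implementation needs more
care than this (for example around concurrent vmemmap readers):

#define vmemmap_update_pte vmemmap_update_pte
static inline void vmemmap_update_pte(unsigned long addr,
                                      pte_t *ptep, pte_t pte)
{
        /* Break: clear the live entry so the old mapping is gone. */
        pte_clear(&init_mm, addr, ptep);
        /* Flush the stale translation for this kernel address. */
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
        /* Make: install the new entry. */
        set_pte_at(&init_mm, addr, ptep, pte);
}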