Message ID | 20230731074829.79309-4-wangkefeng.wang@huawei.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Mike Kravetz <mike.kravetz@oracle.com>, Muchun Song <muchun.song@linux.dev>, Mina Almasry <almasrymina@google.com>, kirill@shutemov.name, joel@joelfernandes.org, william.kucharski@oracle.com, kaleshsingh@google.com, linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 3/4] mm: mremap: use flush_pud_tlb_range in move_normal_pud()
Date: Mon, 31 Jul 2023 15:48:28 +0800
Message-ID: <20230731074829.79309-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230731074829.79309-1-wangkefeng.wang@huawei.com>
References: <20230731074829.79309-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.41.0
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
Series | mm: mremap: fix move page tables |
Commit Message
Kefeng Wang
July 31, 2023, 7:48 a.m. UTC
Architectures may need to do special things when flushing the TLB for a
huge-page (PUD-level) mapping, so use the more specific
flush_pud_tlb_range() instead of the generic flush_tlb_range().
Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/mremap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
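
For context: flush_tlb_range() only tells the architecture to invalidate an
address range, which it may do at base-page granularity, while
flush_pud_tlb_range() additionally conveys that the range is backed by a
single PUD-level entry, so a stride-aware implementation can issue one wide
invalidation. The sketch below is a minimal illustration of that split, not
the kernel's actual definitions; EXAMPLE_ARCH_HAS_PUD_TLB_FLUSH and
example_flush_range_with_stride() are made-up names.

/*
 * Minimal sketch, assuming a hypothetical arch guard and helper
 * (EXAMPLE_ARCH_HAS_PUD_TLB_FLUSH, example_flush_range_with_stride());
 * not the kernel's actual definitions.
 */
#ifdef EXAMPLE_ARCH_HAS_PUD_TLB_FLUSH
/* Arch knows the range covers one PUD-level entry: one wide invalidation. */
#define flush_pud_tlb_range(vma, addr, end) \
	example_flush_range_with_stride(vma, addr, end, PUD_SIZE)
#else
/* Fallback with no PUD-specific knowledge: flush the range as-is. */
#define flush_pud_tlb_range(vma, addr, end) \
	flush_tlb_range(vma, addr, end)
#endif

With an override of that shape in place, the call in move_normal_pud() can
invalidate the old 1GB mapping in a single operation instead of page by page.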
Comments
Hi Kefeng,

kernel test robot noticed the following build errors:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on arm-perf/for-next/perf linus/master v6.5-rc4 next-20230731]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-hugetlb-use-flush_hugetlb_tlb_range-in-move_hugetlb_page_tables/20230731-154016
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link:    https://lore.kernel.org/r/20230731074829.79309-4-wangkefeng.wang%40huawei.com
patch subject: [PATCH 3/4] mm: mremap: use flush_pud_tlb_range in move_normal_pud()
config: riscv-allmodconfig (https://download.01.org/0day-ci/archive/20230801/202308010022.uY01vAew-lkp@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230801/202308010022.uY01vAew-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308010022.uY01vAew-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/mremap.c: In function 'move_normal_pud':
>> mm/mremap.c:336:9: error: implicit declaration of function 'flush_pud_tlb_range'; did you mean 'flush_pmd_tlb_range'? [-Werror=implicit-function-declaration]
     336 |         flush_pud_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
         |         ^~~~~~~~~~~~~~~~~~~
         |         flush_pmd_tlb_range
   cc1: some warnings being treated as errors


vim +336 mm/mremap.c

   302	
   303	#if CONFIG_PGTABLE_LEVELS > 2 && defined(CONFIG_HAVE_MOVE_PUD)
   304	static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
   305			  unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
   306	{
   307		spinlock_t *old_ptl, *new_ptl;
   308		struct mm_struct *mm = vma->vm_mm;
   309		pud_t pud;
   310	
   311		if (!arch_supports_page_table_move())
   312			return false;
   313		/*
   314		 * The destination pud shouldn't be established, free_pgtables()
   315		 * should have released it.
   316		 */
   317		if (WARN_ON_ONCE(!pud_none(*new_pud)))
   318			return false;
   319	
   320		/*
   321		 * We don't have to worry about the ordering of src and dst
   322		 * ptlocks because exclusive mmap_lock prevents deadlock.
   323		 */
   324		old_ptl = pud_lock(vma->vm_mm, old_pud);
   325		new_ptl = pud_lockptr(mm, new_pud);
   326		if (new_ptl != old_ptl)
   327			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
   328	
   329		/* Clear the pud */
   330		pud = *old_pud;
   331		pud_clear(old_pud);
   332	
   333		VM_BUG_ON(!pud_none(*new_pud));
   334	
   335		pud_populate(mm, new_pud, pud_pgtable(pud));
 > 336		flush_pud_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
   337		if (new_ptl != old_ptl)
   338			spin_unlock(new_ptl);
   339		spin_unlock(old_ptl);
   340	
   341		return true;
   342	}
   343	#else
   344	static inline bool move_normal_pud(struct vm_area_struct *vma,
   345			unsigned long old_addr, unsigned long new_addr, pud_t *old_pud,
   346			pud_t *new_pud)
   347	{
   348		return false;
   349	}
   350	#endif
   351	
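
The error means that, in this riscv-allmodconfig build, no declaration of
flush_pud_tlb_range() is visible from mm/mremap.c's includes. As a rough
illustration of how a caller in generic code is usually kept building on
configurations that lack such a helper, a guarded fallback of the following
shape could be used; this is an assumption for illustration, not the
resolution actually adopted in this thread.

/*
 * Hypothetical guarded fallback, shown only to illustrate the build
 * problem; not the fix applied in this series.
 */
#ifndef flush_pud_tlb_range
#define flush_pud_tlb_range(vma, addr, end) \
	flush_tlb_range(vma, addr, end)
#endif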
diff --git a/mm/mremap.c b/mm/mremap.c
index 1883205fa22b..25114e56901f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -333,7 +333,7 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
 	VM_BUG_ON(!pud_none(*new_pud));
 
 	pud_populate(mm, new_pud, pud_pgtable(pud));
-	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	flush_pud_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);