Message ID: 20230731074829.79309-2-wangkefeng.wang@huawei.com
State: New
Headers:
  From: Kefeng Wang <wangkefeng.wang@huawei.com>
  To: Andrew Morton <akpm@linux-foundation.org>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Mike Kravetz <mike.kravetz@oracle.com>, Muchun Song <muchun.song@linux.dev>, Mina Almasry <almasrymina@google.com>, kirill@shutemov.name, joel@joelfernandes.org, william.kucharski@oracle.com, kaleshsingh@google.com, linux-mm@kvack.org
  Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Kefeng Wang <wangkefeng.wang@huawei.com>
  Subject: [PATCH 1/4] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()
  Date: Mon, 31 Jul 2023 15:48:26 +0800
  Message-ID: <20230731074829.79309-2-wangkefeng.wang@huawei.com>
  In-Reply-To: <20230731074829.79309-1-wangkefeng.wang@huawei.com>
  References: <20230731074829.79309-1-wangkefeng.wang@huawei.com>
  X-Mailer: git-send-email 2.41.0
Series: mm: mremap: fix move page tables
Commit Message
Kefeng Wang
July 31, 2023, 7:48 a.m. UTC
Architectures may need to do special things when flushing the TLB for
hugepages, so use the more applicable flush_hugetlb_tlb_range() instead
of flush_tlb_range().
Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
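For context, flush_hugetlb_tlb_range() is normally a thin wrapper: mm/hugetlb.c defines it to fall back to flush_tlb_range() on architectures that do not supply their own version, so this change is a no-op everywhere else. A minimal sketch of that fallback pattern (the exact guard and comment wording may differ across kernel versions):

	/*
	 * Architectures with special requirements for evicting hugetlb
	 * backing TLB entries can override this macro; everyone else
	 * gets plain flush_tlb_range() behaviour.
	 */
	#ifndef flush_hugetlb_tlb_range
	#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
	#endif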
Comments
On 07/31/23 15:48, Kefeng Wang wrote:
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Thanks!

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Although, I missed this in 550a7d60bd5e :(

Looks like only powerpc provides an arch specific flush_hugetlb_tlb_range
today.
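To illustrate what such an arch-specific override looks like, here is a hypothetical sketch of an architecture header defining its own flush_hugetlb_tlb_range(); the function and helper names below are illustrative, not the actual powerpc code:

	/*
	 * Hypothetical asm/tlbflush.h: defining the macro before mm/hugetlb.c
	 * is compiled suppresses the generic fallback, so hugetlb ranges get
	 * the architecture's own invalidation path.
	 */
	static inline void arch_flush_hugetlb_tlb_range(struct vm_area_struct *vma,
							unsigned long start,
							unsigned long end)
	{
		/* arch-specific invalidation for huge-page mappings goes here;
		 * falling back to the normal range flush as a placeholder. */
		flush_tlb_range(vma, start, end);
	}
	#define flush_hugetlb_tlb_range(vma, start, end) \
		arch_flush_hugetlb_tlb_range(vma, start, end)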
> On Jul 31, 2023, at 15:48, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Acked-by: Muchun Song <songmuchun@bytedance.com>
On Mon, Jul 31, 2023 at 4:40 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 07/31/23 15:48, Kefeng Wang wrote:
> > Archs may need to do special things when flushing hugepage tlb,
> > so use the more applicable flush_hugetlb_tlb_range() instead of
> > flush_tlb_range().
> >
> > Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Thanks!
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>

Sorry for jumping in late, but given the concerns raised around HGM and the
deviation between hugetlb and the rest of MM, does it make sense to try to
make an incremental effort towards avoiding hugetlb specialization?

In the context of this patch, I would prefer that the arch upgrade
flush_tlb_range() to handle hugetlb correctly, instead of adding more
hugetlb-specific deviations, a la flush_hugetlb_tlb_range(). While at it,
maybe replace flush_hugetlb_tlb_range() in the code with flush_tlb_range().
Although, I don't have the expertise to judge whether upgrading
flush_tlb_range() to handle hugetlb is easy or feasible at all.

> Although, I missed this in 550a7d60bd5e :(
>
> Looks like only powerpc provides an arch specific flush_hugetlb_tlb_range
> today.
> --
> Mike Kravetz
>
> > ---
> >  mm/hugetlb.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 64a3239b6407..ac876bfba340 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> >  	}
> >
> >  	if (shared_pmd)
> > -		flush_tlb_range(vma, range.start, range.end);
> > +		flush_hugetlb_tlb_range(vma, range.start, range.end);
> >  	else
> > -		flush_tlb_range(vma, old_end - len, old_end);
> > +		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
> >  	mmu_notifier_invalidate_range_end(&range);
> >  	i_mmap_unlock_write(mapping);
> >  	hugetlb_vma_unlock_write(vma);
> > --
> > 2.41.0
> >
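A rough sketch of the alternative Mina describes, where the architecture's flush_tlb_range() dispatches on hugetlb VMAs itself so callers never need a hugetlb-specific helper. This is hypothetical; arch_flush_hugetlb_pages() and arch_flush_normal_pages() are illustrative names, not existing kernel APIs:

	/*
	 * Hypothetical arch implementation: callers always use
	 * flush_tlb_range(), and the hugetlb case is handled internally
	 * instead of via a separate flush_hugetlb_tlb_range() entry point.
	 */
	static inline void flush_tlb_range(struct vm_area_struct *vma,
					   unsigned long start, unsigned long end)
	{
		if (is_vm_hugetlb_page(vma))
			arch_flush_hugetlb_pages(vma, start, end);	/* illustrative */
		else
			arch_flush_normal_pages(vma, start, end);	/* illustrative */
	}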
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..ac876bfba340 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}
 
 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);