Message ID: 20230918103213.4166210-1-wangkefeng.wang@huawei.com
Headers:
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ying.huang@intel.com, david@redhat.com, Zi Yan <ziy@nvidia.com>, Mike Kravetz <mike.kravetz@oracle.com>, hughd@google.com, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 0/6] mm: convert numa balancing functions to use a folio
Date: Mon, 18 Sep 2023 18:32:07 +0800
Message-ID: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
Series: mm: convert numa balancing functions to use a folio
Message
Kefeng Wang
Sept. 18, 2023, 10:32 a.m. UTC
do_numa_page() only handles non-compound pages, and only PMD-mapped THP is handled in do_huge_pmd_numa_page(). But large, PTE-mapped folios will be supported, so let's convert more numa balancing functions to use/take a folio in preparation for that; no functional change intended for now.

Kefeng Wang (6):
  sched/numa, mm: make numa migrate functions to take a folio
  mm: mempolicy: make mpol_misplaced() to take a folio
  mm: memory: make numa_migrate_prep() to take a folio
  mm: memory: use a folio in do_numa_page()
  mm: memory: add vm_normal_pmd_folio()
  mm: huge_memory: use a folio in do_huge_pmd_numa_page()

 include/linux/mempolicy.h            |  4 +--
 include/linux/mm.h                   |  2 ++
 include/linux/sched/numa_balancing.h |  4 +--
 kernel/sched/fair.c                  | 12 +++----
 mm/huge_memory.c                     | 28 ++++++++--------
 mm/internal.h                        |  2 +-
 mm/memory.c                          | 49 ++++++++++++++------------
 mm/mempolicy.c                       | 20 ++++++------

 8 files changed, 65 insertions(+), 56 deletions(-)
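The shape of the conversion the cover letter describes — helpers that used to take a `struct page *` now taking a `struct folio *` — can be sketched in plain C. The types and function names below are illustrative stand-ins only (the real `struct page` and `struct folio` live in include/linux/mm_types.h and carry far more state):

```c
#include <stddef.h>

/* Toy stand-ins for the kernel types; assumptions for illustration. */
struct page {
	int nid;			/* node the base page lives on */
};

struct folio {
	struct page page;		/* head page of the folio */
	unsigned int nr_pages;		/* folio size in base pages */
};

/* Before the series: a numa-balancing helper takes a single page and
 * can only reason about one base page at a time. */
static int page_numa_node(const struct page *page)
{
	return page->nid;
}

/* After the series: the same helper takes a folio, so callers such as
 * do_numa_page() can later hand it a large, PTE-mapped folio. */
static int folio_numa_node(const struct folio *folio)
{
	return folio->page.nid;
}
```

The point of doing this across all six patches is that once every helper speaks in folios, supporting large PTE-mapped folios needs no further interface churn.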
Comments
On Mon, Sep 18, 2023 at 06:32:07PM +0800, Kefeng Wang wrote:
> do_numa_page() only handles non-compound pages, and only PMD-mapped THP
> is handled in do_huge_pmd_numa_page(). But large, PTE-mapped folios will
> be supported, so let's convert more numa balancing functions to use/take
> a folio in preparation for that; no functional change intended for now.
>
> Kefeng Wang (6):
>   sched/numa, mm: make numa migrate functions to take a folio
>   mm: mempolicy: make mpol_misplaced() to take a folio
>   mm: memory: make numa_migrate_prep() to take a folio
>   mm: memory: use a folio in do_numa_page()
>   mm: memory: add vm_normal_pmd_folio()
>   mm: huge_memory: use a folio in do_huge_pmd_numa_page()

This all seems OK. It's kind of hard to review, though, because you change the same line multiple times. I think it works out better to go top-down instead of bottom-up. That is, start with do_numa_page() and pass &folio->page to numa_migrate_prep(). Then do vm_normal_pmd_folio(), followed by do_huge_pmd_numa_page(). Fourth would have been numa_migrate_prep(), etc. I don't want to ask you to redo the entire series, but keep it in mind for future patch series.

Also, it's nice to do things like remove the unnecessary 'extern' from function declarations when you change them from page to folio. And please try to stick to 80 columns; I know it's not always easy/possible.
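The top-down ordering suggested in this review keeps the tree compiling at every step: a caller converted first bridges to a not-yet-converted callee by passing `&folio->page`. A minimal sketch with hypothetical stand-in types and function names (not the real kernel definitions):

```c
#include <stddef.h>

/* Illustrative stand-ins; the real kernel types are much richer. */
struct page {
	int nid;
};

struct folio {
	struct page page;	/* head page of the folio */
};

/* The callee has not been converted yet and still takes a page. */
static int prep_migrate(struct page *page)
{
	return page->nid;
}

/* The already-converted caller bridges the gap by passing &folio->page
 * until the callee is converted in a later patch. */
static int handle_numa_fault(struct folio *folio)
{
	return prep_migrate(&folio->page);
}
```

Once the callee is converted to take a folio, the `&folio->page` at the call site simply becomes `folio`, so each patch in the series touches a given line only once.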
On 2023/9/18 20:57, Matthew Wilcox wrote:
> On Mon, Sep 18, 2023 at 06:32:07PM +0800, Kefeng Wang wrote:
>> do_numa_page() only handles non-compound pages, and only PMD-mapped THP
>> is handled in do_huge_pmd_numa_page(). But large, PTE-mapped folios will
>> be supported, so let's convert more numa balancing functions to use/take
>> a folio in preparation for that; no functional change intended for now.
>>
>> Kefeng Wang (6):
>>   sched/numa, mm: make numa migrate functions to take a folio
>>   mm: mempolicy: make mpol_misplaced() to take a folio
>>   mm: memory: make numa_migrate_prep() to take a folio
>>   mm: memory: use a folio in do_numa_page()
>>   mm: memory: add vm_normal_pmd_folio()
>>   mm: huge_memory: use a folio in do_huge_pmd_numa_page()
>
> This all seems OK. It's kind of hard to review, though, because you change
> the same line multiple times. I think it works out better to go top-down
> instead of bottom-up. That is, start with do_numa_page() and pass
> &folio->page to numa_migrate_prep(). Then do vm_normal_pmd_folio(), followed
> by do_huge_pmd_numa_page(). Fourth would have been numa_migrate_prep(),
> etc. I don't want to ask you to redo the entire series, but keep it in
> mind for future patch series.
>
> Also, it's nice to do things like remove the unnecessary 'extern' from
> function declarations when you change them from page to folio. And
> please try to stick to 80 columns; I know it's not always easy/possible.
>

Thanks for your review and suggestions; I will keep them in mind when sending new patches. Thanks.