From patchwork Tue Aug 22 00:53:49 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 136432
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
    ying.huang@intel.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/4] mm: migrate: factor out migration validation into numa_page_can_migrate()
Date: Tue, 22 Aug 2023 08:53:49 +0800
Message-Id: <6e1c5a86b8d960294582a1221a1a20eb66e53b37.1692665449.git.baolin.wang@linux.alibaba.com>

There are now several places that validate whether a page can be
migrated or not, so factor these checks out into a new function to make
them more maintainable.
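As an illustration, each fault-path caller now makes a single validation
call before attempting migration; a minimal sketch of the resulting
pattern (locking and the surrounding fault handling are elided, see the
diff below for the real context):

        /* Bail out early if the misplaced page cannot be migrated at all. */
        if (!numa_page_can_migrate(vma, page)) {
                put_page(page); /* drop the reference taken by the fault path */
                goto migrate_fail;
        }

        migrated = migrate_misplaced_page(page, vma, target_nid);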
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/huge_memory.c |  6 ++++++
 mm/internal.h    |  1 +
 mm/memory.c      | 30 ++++++++++++++++++++++++++++++
 mm/migrate.c     | 19 -------------------
 4 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4465915711c3..4a9b34a89854 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1540,11 +1540,17 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         spin_unlock(vmf->ptl);
         writable = false;
 
+        if (!numa_page_can_migrate(vma, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         migrated = migrate_misplaced_page(page, vma, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
         } else {
+migrate_fail:
                 flags |= TNF_MIGRATE_FAIL;
                 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
                 if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
diff --git a/mm/internal.h b/mm/internal.h
index f59a53111817..1e00b8a30910 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -933,6 +933,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end);
 
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
                       unsigned long addr, int page_nid, int *flags);
+bool numa_page_can_migrate(struct vm_area_struct *vma, struct page *page);
 
 void free_zone_device_page(struct page *page);
 int migrate_device_coherent_page(struct page *page);
diff --git a/mm/memory.c b/mm/memory.c
index 12647d139a13..fc6f6b7a70e1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4735,6 +4735,30 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
         return mpol_misplaced(page, vma, addr);
 }
 
+bool numa_page_can_migrate(struct vm_area_struct *vma, struct page *page)
+{
+        /*
+         * Don't migrate file pages that are mapped in multiple processes
+         * with execute permissions as they are probably shared libraries.
+         */
+        if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+            (vma->vm_flags & VM_EXEC))
+                return false;
+
+        /*
+         * Also do not migrate dirty pages as not all filesystems can move
+         * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+         */
+        if (page_is_file_lru(page) && PageDirty(page))
+                return false;
+
+        /* Do not migrate THP mapped by multiple processes */
+        if (PageTransHuge(page) && total_mapcount(page) > 1)
+                return false;
+
+        return true;
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
         struct vm_area_struct *vma = vmf->vma;
@@ -4815,11 +4839,17 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         pte_unmap_unlock(vmf->pte, vmf->ptl);
         writable = false;
 
+        if (!numa_page_can_migrate(vma, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         /* Migrate to the requested node */
         if (migrate_misplaced_page(page, vma, target_nid)) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
+migrate_fail:
                 flags |= TNF_MIGRATE_FAIL;
                 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                                vmf->address, &vmf->ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index e21d5a7e7447..9cc98fb1d6ec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 
         VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
-        /* Do not migrate THP mapped by multiple processes */
-        if (PageTransHuge(page) && total_mapcount(page) > 1)
-                return 0;
-
         /* Avoid migrating to a node that is nearly full */
         if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
                 int z;
@@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
         LIST_HEAD(migratepages);
         int nr_pages = thp_nr_pages(page);
 
-        /*
-         * Don't migrate file pages that are mapped in multiple processes
-         * with execute permissions as they are probably shared libraries.
-         */
-        if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
-            (vma->vm_flags & VM_EXEC))
-                goto out;
-
-        /*
-         * Also do not migrate dirty pages as not all filesystems can move
-         * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
-         */
-        if (page_is_file_lru(page) && PageDirty(page))
-                goto out;
-
         isolated = numamigrate_isolate_page(pgdat, page);
         if (!isolated)
                 goto out;

From patchwork Tue Aug 22 00:53:50 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 136441
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
    ying.huang@intel.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
Date: Tue, 22 Aug 2023 08:53:50 +0800
Message-Id: <9ff2a9e3e644103a08b9b84b76b39bbd4c60020b.1692665449.git.baolin.wang@linux.alibaba.com>

Move numamigrate_isolate_page() into do_numa_page() to simplify
migrate_misplaced_page(), which now focuses only on page migration; this
also serves as a preparation for supporting batch migration through
migrate_misplaced_page(). While we are at it, change
numamigrate_isolate_page() to return a boolean to make its return value
clearer.
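After this change a caller is expected to isolate the page itself before
asking for migration; a minimal sketch of the new calling sequence in
the fault path (error handling elided, see the diff below for the real
context):

        pgdat = NODE_DATA(target_nid);
        /* numamigrate_isolate_page() now returns a bool: isolated or not. */
        if (!numamigrate_isolate_page(pgdat, page)) {
                put_page(page);
                goto migrate_fail;
        }

        migrated = migrate_misplaced_page(page, vma, target_nid);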
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/migrate.h |  6 ++++++
 mm/huge_memory.c        |  7 +++++++
 mm/memory.c             |  7 +++++++
 mm/migrate.c            | 22 +++++++---------------
 4 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 711dd9412561..ddcd62ec2c12 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -144,12 +144,18 @@ const struct movable_operations *page_movable_ops(struct page *page)
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                            int node);
+bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page);
 #else
 static inline int migrate_misplaced_page(struct page *page,
                                          struct vm_area_struct *vma, int node)
 {
         return -EAGAIN; /* can't migrate now */
 }
+
+static inline bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+{
+        return false;
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_MIGRATION
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4a9b34a89854..07149ead11e4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1496,6 +1496,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
         bool migrated = false, writable = false;
         int flags = 0;
+        pg_data_t *pgdat;
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1545,6 +1546,12 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        pgdat = NODE_DATA(target_nid);
+        if (!numamigrate_isolate_page(pgdat, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         migrated = migrate_misplaced_page(page, vma, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
diff --git a/mm/memory.c b/mm/memory.c
index fc6f6b7a70e1..4e451b041488 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4769,6 +4769,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         int target_nid;
         pte_t pte, old_pte;
         int flags = 0;
+        pg_data_t *pgdat;
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4844,6 +4845,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        pgdat = NODE_DATA(target_nid);
+        if (!numamigrate_isolate_page(pgdat, page)) {
+                put_page(page);
+                goto migrate_fail;
+        }
+
         /* Migrate to the requested node */
         if (migrate_misplaced_page(page, vma, target_nid)) {
                 page_nid = target_nid;
diff --git a/mm/migrate.c b/mm/migrate.c
index 9cc98fb1d6ec..0b2b69a2a7ab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2478,7 +2478,7 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
         return __folio_alloc_node(gfp, order, nid);
 }
 
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
         int nr_pages = thp_nr_pages(page);
         int order = compound_order(page);
@@ -2496,11 +2496,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
                         break;
                 }
                 wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
-                return 0;
+                return false;
         }
 
         if (!isolate_lru_page(page))
-                return 0;
+                return false;
 
         mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
                             page_is_file_lru(page), nr_pages);
@@ -2511,7 +2511,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
          * disappearing underneath us during migration.
          */
         put_page(page);
-        return 1;
+        return true;
 }
 
 /*
@@ -2523,16 +2523,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                            int node)
 {
         pg_data_t *pgdat = NODE_DATA(node);
-        int isolated;
+        int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
         LIST_HEAD(migratepages);
         int nr_pages = thp_nr_pages(page);
 
-        isolated = numamigrate_isolate_page(pgdat, page);
-        if (!isolated)
-                goto out;
-
         list_add(&page->lru, &migratepages);
         nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
                                      NULL, node, MIGRATE_ASYNC,
@@ -2544,7 +2540,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                                 page_is_file_lru(page), -nr_pages);
                         putback_lru_page(page);
                 }
-                isolated = 0;
+                migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
@@ -2553,11 +2549,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
                                     nr_succeeded);
         }
         BUG_ON(!list_empty(&migratepages));
-        return isolated;
-
-out:
-        put_page(page);
-        return 0;
+        return migrated;
 }
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */

From patchwork Tue Aug 22 00:53:51 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 136518
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
    ying.huang@intel.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/4] mm: migrate: change migrate_misplaced_page() to support multiple pages migration
Date: Tue, 22 Aug 2023 08:53:51 +0800
Message-Id: <02c3d36270705f0dfec1ea583e252464cb48d802.1692665449.git.baolin.wang@linux.alibaba.com>

Expand migrate_misplaced_page() to take a list of pages so that it can
migrate multiple pages, as a preparation for supporting batch migration
for NUMA balancing, as well as compound page NUMA balancing in the
future.
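With the new interface a caller collects isolated pages on a list and
passes the list in; a minimal sketch for the current single-page case (a
future batch caller would simply add more pages to the list before the
call):

        LIST_HEAD(migratepages);

        list_add(&page->lru, &migratepages);
        migrated = migrate_misplaced_page(&migratepages, vma,
                                          page_nid, target_nid);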
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/migrate.h |  9 +++++----
 mm/huge_memory.c        |  5 ++++-
 mm/memory.c             |  4 +++-
 mm/migrate.c            | 26 ++++++++++----------------
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index ddcd62ec2c12..87edce8e939d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -142,12 +142,13 @@ const struct movable_operations *page_movable_ops(struct page *page)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-                           int node);
+int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct *vma,
+                           int source_nid, int target_nid);
 bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page);
 #else
-static inline int migrate_misplaced_page(struct page *page,
-                                         struct vm_area_struct *vma, int node)
+static inline int migrate_misplaced_page(struct list_head *migratepages,
+                                         struct vm_area_struct *vma,
+                                         int source_nid, int target_nid)
 {
         return -EAGAIN; /* can't migrate now */
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 07149ead11e4..4401a3493544 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1497,6 +1497,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         bool migrated = false, writable = false;
         int flags = 0;
         pg_data_t *pgdat;
+        LIST_HEAD(migratepages);
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1552,7 +1553,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
-        migrated = migrate_misplaced_page(page, vma, target_nid);
+        list_add(&page->lru, &migratepages);
+        migrated = migrate_misplaced_page(&migratepages, vma,
+                                          page_nid, target_nid);
         if (migrated) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
diff --git a/mm/memory.c b/mm/memory.c
index 4e451b041488..9e417e8dd5d5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4770,6 +4770,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         pte_t pte, old_pte;
         int flags = 0;
         pg_data_t *pgdat;
+        LIST_HEAD(migratepages);
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4851,8 +4852,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
                 goto migrate_fail;
         }
 
+        list_add(&page->lru, &migratepages);
         /* Migrate to the requested node */
-        if (migrate_misplaced_page(page, vma, target_nid)) {
+        if (migrate_misplaced_page(&migratepages, vma, page_nid, target_nid)) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 0b2b69a2a7ab..fae7224b8e64 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2519,36 +2519,30 @@ bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
  * node. Caller is expected to have an elevated reference count on
  * the page that will be dropped by this function before returning.
  */
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-                           int node)
+int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct *vma,
+                           int source_nid, int target_nid)
 {
-        pg_data_t *pgdat = NODE_DATA(node);
+        pg_data_t *pgdat = NODE_DATA(target_nid);
         int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
-        LIST_HEAD(migratepages);
-        int nr_pages = thp_nr_pages(page);
 
-        list_add(&page->lru, &migratepages);
-        nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
-                                     NULL, node, MIGRATE_ASYNC,
+        nr_remaining = migrate_pages(migratepages, alloc_misplaced_dst_folio,
+                                     NULL, target_nid, MIGRATE_ASYNC,
                                      MR_NUMA_MISPLACED, &nr_succeeded);
         if (nr_remaining) {
-                if (!list_empty(&migratepages)) {
-                        list_del(&page->lru);
-                        mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-                                        page_is_file_lru(page), -nr_pages);
-                        putback_lru_page(page);
-                }
+                if (!list_empty(migratepages))
+                        putback_movable_pages(migratepages);
+
                 migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
-                if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+                if (!node_is_toptier(source_nid) && node_is_toptier(target_nid))
                         mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
                                             nr_succeeded);
         }
-        BUG_ON(!list_empty(&migratepages));
+        BUG_ON(!list_empty(migratepages));
         return migrated;
 }
 #endif /* CONFIG_NUMA_BALANCING */

From patchwork Tue Aug 22 00:53:52 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 136464
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com, david@redhat.com,
    ying.huang@intel.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/4] mm: migrate: change to return the number of pages migrated successfully
Date: Tue, 22 Aug 2023 08:53:52 +0800
Message-Id: <9688ba40be86d7d0af0961e74d2a182ce65f5f8c.1692665449.git.baolin.wang@linux.alibaba.com>

Change migrate_misplaced_page() to return the number of pages migrated
successfully, which will be used to work out how many pages failed to
migrate when batching migration. With NUMA balancing support for
compound pages, it is possible that only some of the pages are migrated
successfully, so migrate_misplaced_page() needs to report the number of
pages that were actually migrated.
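A caller can then treat the return value as a success count rather than
a boolean; a minimal sketch of the single-page case, where any non-zero
count means the page was migrated (see the diff below for the real
context):

        nr_succeeded = migrate_misplaced_page(&migratepages, vma,
                                              page_nid, target_nid);
        if (nr_succeeded) {
                flags |= TNF_MIGRATED;
                page_nid = target_nid;
        } else {
                flags |= TNF_MIGRATE_FAIL;
        }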
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/huge_memory.c | 9 +++++----
 mm/memory.c      | 4 +++-
 mm/migrate.c     | 5 +----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4401a3493544..951f73d6b5bf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1494,10 +1494,11 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
         int page_nid = NUMA_NO_NODE;
         int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
-        bool migrated = false, writable = false;
+        bool writable = false;
         int flags = 0;
         pg_data_t *pgdat;
         LIST_HEAD(migratepages);
+        int nr_successed;
 
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1554,9 +1555,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
         }
 
         list_add(&page->lru, &migratepages);
-        migrated = migrate_misplaced_page(&migratepages, vma,
-                                          page_nid, target_nid);
-        if (migrated) {
+        nr_successed = migrate_misplaced_page(&migratepages, vma,
+                                              page_nid, target_nid);
+        if (nr_successed) {
                 flags |= TNF_MIGRATED;
                 page_nid = target_nid;
         } else {
diff --git a/mm/memory.c b/mm/memory.c
index 9e417e8dd5d5..2773cd804ee9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4771,6 +4771,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         int flags = 0;
         pg_data_t *pgdat;
         LIST_HEAD(migratepages);
+        int nr_succeeded;
 
         /*
          * The "pte" at this point cannot be used safely without
@@ -4854,7 +4855,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
         list_add(&page->lru, &migratepages);
         /* Migrate to the requested node */
-        if (migrate_misplaced_page(&migratepages, vma, page_nid, target_nid)) {
+        nr_succeeded = migrate_misplaced_page(&migratepages, vma, page_nid, target_nid);
+        if (nr_succeeded) {
                 page_nid = target_nid;
                 flags |= TNF_MIGRATED;
         } else {
diff --git a/mm/migrate.c b/mm/migrate.c
index fae7224b8e64..5435cfb225ab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2523,7 +2523,6 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
                            int source_nid, int target_nid)
 {
         pg_data_t *pgdat = NODE_DATA(target_nid);
-        int migrated = 1;
         int nr_remaining;
         unsigned int nr_succeeded;
 
@@ -2533,8 +2532,6 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
         if (nr_remaining) {
                 if (!list_empty(migratepages))
                         putback_movable_pages(migratepages);
-
-                migrated = 0;
         }
         if (nr_succeeded) {
                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
@@ -2543,7 +2540,7 @@ int migrate_misplaced_page(struct list_head *migratepages, struct vm_area_struct
                         nr_succeeded);
         }
         BUG_ON(!list_empty(migratepages));
-        return migrated;
+        return nr_succeeded;
 }
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */
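Taken together, after this series the NUMA-hinting fault path follows
the call pattern below; this is a sketch assembled from the diffs above,
with locking, PTE re-checks and reference counting elided:

        /* 1. Validate that the misplaced page is migratable at all. */
        if (!numa_page_can_migrate(vma, page)) {
                put_page(page);
                goto migrate_fail;
        }

        /* 2. Isolate it from the LRU, aiming at the preferred node. */
        if (!numamigrate_isolate_page(NODE_DATA(target_nid), page)) {
                put_page(page);
                goto migrate_fail;
        }

        /*
         * 3. Collect isolated pages on a list and migrate them in one call;
         * the return value is the number of pages migrated successfully.
         */
        list_add(&page->lru, &migratepages);
        nr_succeeded = migrate_misplaced_page(&migratepages, vma,
                                              page_nid, target_nid);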