Message ID | c44225ae71b1be21e32891e2143044863a0b91b1.1666251624.git.baolin.wang@linux.alibaba.com |
---|---|
State | New |
Headers |
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: david@redhat.com, ying.huang@intel.com, ziy@nvidia.com, shy828301@gmail.com, baolin.wang@linux.alibaba.com, jingshan@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] mm: migrate: Try again if THP split is failed due to page refcnt
Date: Thu, 20 Oct 2022 15:49:01 +0800
Message-Id: <c44225ae71b1be21e32891e2143044863a0b91b1.1666251624.git.baolin.wang@linux.alibaba.com>
In-Reply-To: <cc48dc1e4db8c33289f168cf380ab3641f45f8ad.1666251624.git.baolin.wang@linux.alibaba.com>
References: <cc48dc1e4db8c33289f168cf380ab3641f45f8ad.1666251624.git.baolin.wang@linux.alibaba.com>
Series | [1/2] mm: gup: Re-pin pages in case of trying several times to migrate |
Commit Message
Baolin Wang
Oct. 20, 2022, 7:49 a.m. UTC
When creating a virtual machine, we use memfd_create() to get a file
descriptor that can be used to create shared memory mappings with mmap();
the mmap() call also passes the MAP_POPULATE flag so that physical pages
are allocated for the virtual machine up front.
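
A minimal sketch of that allocation pattern (illustrative only, not QEMU's
actual code; the name and size below are made up):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t guest_ram_size = 1UL << 30;	/* 1 GiB, illustrative only */

	/* Anonymous shmem-backed file for the guest RAM. */
	int fd = memfd_create("guest-ram", 0);
	if (fd < 0 || ftruncate(fd, guest_ram_size) < 0) {
		perror("memfd");
		return 1;
	}

	/*
	 * MAP_SHARED so the memory can be shared with other processes,
	 * MAP_POPULATE so the physical pages are allocated immediately;
	 * these are the pages that may come from the CMA area.
	 */
	void *ram = mmap(NULL, guest_ram_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_POPULATE, fd, 0);
	if (ram == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(ram, 0, 4096);	/* touch the first page */
	munmap(ram, guest_ram_size);
	close(fd);
	return 0;
}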
When allocating physical pages for the guest, the host can fall back to
allocating CMA pages for the guest once over half of the zone's free
memory is in the CMA area.
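
The "over half" heuristic referred to here lives in the page allocator;
roughly, as a paraphrase of the __rmqueue() logic in mm/page_alloc.c of
that era, reduced to a standalone helper (treat the details as an
approximation):

#include <stdbool.h>

/*
 * Paraphrase of the page allocator's CMA fallback heuristic: steer movable
 * allocations into CMA pageblocks once more than half of the zone's free
 * pages sit in the CMA area.  The real check compares the zone's
 * NR_FREE_CMA_PAGES and NR_FREE_PAGES vmstat counters.
 */
static bool prefer_cma_fallback(unsigned long nr_free_pages,
				unsigned long nr_free_cma_pages)
{
	return nr_free_cma_pages > nr_free_pages / 2;
}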
In the guest OS, when an application wants to do DMA, our QEMU calls the
VFIO_IOMMU_MAP_DMA ioctl to longterm-pin the DMA pages and create IOMMU
mappings for them. However, we found that this longterm pinning through
VFIO_IOMMU_MAP_DMA can fail sometimes.
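
For reference, a hedged sketch of that pinning path; the container fd,
addresses and sizes are placeholders, not QEMU's real values:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Map a chunk of guest RAM for device DMA through the VFIO type1 IOMMU backend. */
static int map_for_dma(int container_fd, void *vaddr,
		       unsigned long long iova, unsigned long long size)
{
	struct vfio_iommu_type1_dma_map map;

	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (unsigned long long)(uintptr_t)vaddr;	/* user address backing the guest RAM */
	map.iova  = iova;					/* IOVA the device will use */
	map.size  = size;

	/*
	 * This ioctl longterm-pins the pages behind vaddr; any CMA pages in
	 * that range must first be migrated out of the CMA area, and this is
	 * where the failure described above surfaces as an ioctl error.
	 */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}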
After some investigation, we found that the pages used for the DMA mapping
can contain CMA pages, and the longterm pin can fail because those CMA
pages fail to be migrated out of the CMA area. The migration can fail due
to a temporary reference count on the pages or a memory allocation failure.
As a result, the VFIO_IOMMU_MAP_DMA ioctl returns an error and the
application fails to start.
In one migration failure case I observed (which is not easy to reproduce),
the 'thp_migration_fail' count is 1 and the 'thp_split_page_failed' count
is also 1.
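
These counters come from /proc/vmstat (present when
CONFIG_TRANSPARENT_HUGEPAGE is enabled); a small illustrative helper to
dump them:

#include <stdio.h>
#include <string.h>

/* Print the thp_migration_* and thp_split_page* counters from /proc/vmstat. */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "thp_migration_", 14) ||
		    !strncmp(line, "thp_split_page", 14))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}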
That means that, while migrating a THP located in the CMA area, a new THP
could not be allocated due to memory fragmentation, so the kernel tried to
split the THP; however, the THP split also failed, probably because of a
temporary reference count on the THP. Such a temporary reference can be
taken by a page cache drop (I observed a drop_caches operation in the
system), but the shmem page caches could not actually be dropped because
they were already dirty at that time.
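
For context, the split fails in this situation because
split_huge_page_to_list() refuses to split a page whose references are not
all accounted for; a conceptual sketch of that check (not the kernel's
exact code):

#include <stdbool.h>

/*
 * Conceptual sketch of the can-split check: every reference to the huge
 * page must be explained by its mappings plus the expected page cache or
 * swap cache pins.  A single transient extra reference, e.g. one taken by
 * a concurrent page cache walk during drop_caches, makes this false and
 * the split returns -EBUSY (-EAGAIN with this patch).
 */
static bool can_split_sketch(int ref_count, int map_count, int extra_pins)
{
	return ref_count - map_count == 1 + extra_pins;
}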
Especially for a THP split failure caused by a temporary reference count,
we can retry the split to mitigate the migration failure in this case, as
suggested in a previous discussion [1].
[1] https://lore.kernel.org/all/470dc638-a300-f261-94b4-e27250e42f96@redhat.com/
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/huge_memory.c | 4 ++--
mm/migrate.c | 18 +++++++++++++++---
2 files changed, 17 insertions(+), 5 deletions(-)
Comments
Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> Especially for a THP split failure caused by a temporary reference count,
> we can retry the split to mitigate the migration failure in this case, as
> suggested in a previous discussion [1].

Does the patch solve your problem?

> +				} else if (reason == MR_LONGTERM_PIN &&
> +					   rc == -EAGAIN) {

In case reason != MR_LONGTERM_PIN, you change the return value of
migrate_pages(). So you need to use another variable for the return value.

Best Regards,
Huang, Ying
On 10/20/2022 4:24 PM, Huang, Ying wrote:

> Does the patch solve your problem?

The problem is not easy to reproduce and I will test this patch on our
products. However, I think this is a likely way for the migration to fail,
which needs to be addressed to mitigate the failure.

> In case reason != MR_LONGTERM_PIN, you change the return value of
> migrate_pages(). So you need to use another variable for the return value.

Good catch, will fix in the next version. Thanks for your comments.
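
One possible shape of that fix (illustrative only; not necessarily what the
next version of the patch will do) is to keep the split result in its own
variable so that 'rc', and therefore the value migrate_pages() returns, is
left untouched when reason != MR_LONGTERM_PIN:

			if (!nosplit) {
				/* Hypothetical local so 'rc' is not clobbered. */
				int split_ret = try_split_thp(page, &thp_split_pages);

				if (!split_ret) {
					nr_thp_split++;
					break;
				} else if (reason == MR_LONGTERM_PIN &&
					   split_ret == -EAGAIN) {
					/* Retry the split to mitigate longterm-pin failure. */
					thp_retry++;
					nr_retry_pages += nr_subpages;
					break;
				}
			}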
On Thu, Oct 20, 2022 at 2:33 AM Baolin Wang <baolin.wang@linux.alibaba.com> wrote:

> The problem is not easy to reproduce and I will test this patch on our
> products. However, I think this is a likely way for the migration to fail,
> which needs to be addressed to mitigate the failure.

You may try to trace all migrations across your fleet (or just pick some
sample machines, which should make data analysis easier) and filter the
migrations by reason, for example MR_LONGTERM_PIN, then compare the
migration success rate before and after the patch. That should be a good
justification, but it may need some work on data aggregation, processing
and analysis; I am not sure how feasible it is.
On 10/21/2022 3:21 AM, Yang Shi wrote:

> You may try to trace all migrations across your fleet (or just pick some
> sample machines, which should make data analysis easier) and filter the
> migrations by reason, for example MR_LONGTERM_PIN, then compare the
> migration success rate before and after the patch.

IMO, MR_LONGTERM_PIN migrations are very rare in this case, so we simply
watch for longterm-pin migration failures; once one is observed, the
application is aborted. However, like I said before, the problem is not
easy to reproduce :( Anyway, we'll test these two patches on our products.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ad17c8d..a79f03b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2666,7 +2666,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	 * split PMDs
 	 */
 	if (!can_split_folio(folio, &extra_pins)) {
-		ret = -EBUSY;
+		ret = -EAGAIN;
 		goto out_unlock;
 	}
 
@@ -2716,7 +2716,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		xas_unlock(&xas);
 		local_irq_enable();
 		remap_page(folio, folio_nr_pages(folio));
-		ret = -EBUSY;
+		ret = -EAGAIN;
 	}
 
 out_unlock:
diff --git a/mm/migrate.c b/mm/migrate.c
index 8e5eb6e..55c7855 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1506,9 +1506,21 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				if (is_thp) {
 					nr_thp_failed++;
 					/* THP NUMA faulting doesn't split THP to retry. */
-					if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
-						nr_thp_split++;
-						break;
+					if (!nosplit) {
+						rc = try_split_thp(page, &thp_split_pages);
+						if (!rc) {
+							nr_thp_split++;
+							break;
+						} else if (reason == MR_LONGTERM_PIN &&
+							   rc == -EAGAIN) {
+							/*
+							 * Try again to split THP to mitigate
+							 * the failure of longterm pinning.
+							 */
+							thp_retry++;
+							nr_retry_pages += nr_subpages;
+							break;
+						}
 					}
 				} else if (!no_subpage_counting) {
 					nr_failed++;