From patchwork Tue Oct 25 05:35:59 2022
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 10500
Date: Tue, 25 Oct 2022 14:35:59 +0900
From: Naoya Horiguchi
To: Miaohe Lin
Cc: linux-mm@kvack.org, Andrew Morton, David Hildenbrand, Mike Kravetz,
    Yang Shi, Oscar Salvador, Muchun Song, Jane Chu, Naoya Horiguchi,
    linux-kernel@vger.kernel.org
Subject: [PATCH v8 1/4] mm,hwpoison,hugetlb,memory_hotplug: hotremove memory section with hwpoisoned hugepage
Message-ID: <20221025053559.GA2104800@ik1-406-35019.vs.sakura.ne.jp>
References: <20221024062012.1520887-1-naoya.horiguchi@linux.dev>
 <20221024062012.1520887-2-naoya.horiguchi@linux.dev>
 <2484c6be-5075-2203-4f16-9cfdc4876e11@huawei.com>
In-Reply-To: <2484c6be-5075-2203-4f16-9cfdc4876e11@huawei.com>

On Tue, Oct 25, 2022 at 10:38:11AM +0800, Miaohe Lin wrote:
> On 2022/10/24 14:20, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi
> >
> > A hwpoisoned page is not supposed to be accessed once it is marked, but
> > currently such accesses can happen during memory hotremove because
> > do_migrate_range() can be called before dissolve_free_huge_pages() is
> > called.
> >
> > Clear HPageMigratable for hwpoisoned hugepages to prevent them from
> > being migrated. This should be done under hugetlb_lock to avoid a race
> > with isolate_hugetlb().
> >
> > get_hwpoison_huge_page() needs a flag showing that it is called from
> > unpoison so that it can take a refcount on hwpoisoned hugepages, so add
> > one.
> >
> > Reported-by: Miaohe Lin
> > Signed-off-by: Naoya Horiguchi
> > Reviewed-by: Oscar Salvador
> > Reviewed-by: Miaohe Lin
> > ---
> > ChangeLog v3 -> v7:
> > - introduce TESTCLEARHPAGEFLAG() to determine the value of
> >   migratable_cleared
>
> Many thanks for the update, Naoya. I'm sorry, but TestClearHPageMigratable()
> might be somewhat overkill. As we discussed in the previous thread:
>
> """
> I think I might be nitpicking... But it seems ClearHPageMigratable is not
> enough here.
> 1. In the MF_COUNT_INCREASED case, we don't know whether HPageMigratable
>    is set.
> 2. Even if HPageMigratable is set, there might be a race window before we
>    clear HPageMigratable?
> So "*migratable_cleared = TestClearHPageMigratable" might be better? But I
> might be wrong.
> """
>
> Case 2 turns out to be a non-issue (sorry about that). HPageMigratable is
> always cleared while holding hugetlb_lock, which is already held by
> get_huge_page_for_hwpoison(). So the only case we need to care about is
> case 1, and it can be handled by the more efficient pattern below:
>
> 	if (HPageMigratable(head))
> 		ClearHPageMigratable(head);
>
> This avoids the overhead of the atomic test-and-clear operation, although
> the difference is trivial.
>
> Anyway, this patch still looks good to me, and my Reviewed-by tag still
> applies. Many thanks.

OK, so I replace this 1/4 with the following one, thank you.

- Naoya Horiguchi
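To make the locking argument above concrete before the replacement patch, here is a minimal userspace model of the pattern under discussion. It is an illustrative sketch only, not kernel code: lock, migratable, claim_page() and unclaim_page() are invented stand-ins for hugetlb_lock, HPageMigratable, the hwpoison claim in get_huge_page_for_hwpoison() and the hwpoison_filter() cancellation path. Since every writer of the flag runs with the lock held, a plain test followed by a clear cannot race with another clearer, which is why the atomic TestClearHPageMigratable() is unnecessary here:

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* hugetlb_lock */
	static bool migratable = true;		/* stands in for HPageMigratable */

	/* Clear the flag under the lock, remembering whether we cleared it. */
	static void claim_page(bool *migratable_cleared)
	{
		pthread_mutex_lock(&lock);
		if (migratable) {		/* if (HPageMigratable(head)) */
			migratable = false;	/* ClearHPageMigratable(head) */
			*migratable_cleared = true;
		}
		pthread_mutex_unlock(&lock);
	}

	/* Cancellation path: only the caller that cleared the flag restores it. */
	static void unclaim_page(bool migratable_cleared)
	{
		pthread_mutex_lock(&lock);
		if (migratable_cleared)
			migratable = true;	/* SetHPageMigratable(head) */
		pthread_mutex_unlock(&lock);
	}

	int main(void)
	{
		bool cleared = false;

		claim_page(&cleared);
		printf("cleared by this caller: %s\n", cleared ? "yes" : "no");
		unclaim_page(cleared);
		return 0;
	}

Built with "cc -pthread", the sketch prints that the first claimer cleared the flag. The out-parameter mirrors migratable_cleared in the patch below: only the caller that actually observed the flag set and cleared it takes responsibility for restoring it, and both steps happen under the same lock.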
---
From 22cbd7649d1e23db272306f8d066edb15d4e322c Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi
Date: Tue, 25 Oct 2022 14:18:10 +0900
Subject: [PATCH v8 1/4] mm,hwpoison,hugetlb,memory_hotplug: hotremove memory
 section with hwpoisoned hugepage

A hwpoisoned page is not supposed to be accessed once it is marked, but
currently such accesses can happen during memory hotremove because
do_migrate_range() can be called before dissolve_free_huge_pages() is
called.

Clear HPageMigratable for hwpoisoned hugepages to prevent them from being
migrated. This should be done under hugetlb_lock to avoid a race with
isolate_hugetlb().

get_hwpoison_huge_page() needs a flag showing that it is called from
unpoison so that it can take a refcount on hwpoisoned hugepages, so add
one.

Reported-by: Miaohe Lin
Signed-off-by: Naoya Horiguchi
Reviewed-by: Oscar Salvador
Reviewed-by: Miaohe Lin
---
ChangeLog v7 -> v8:
- remove TestClearHPageMigratable and reduce it to a separate test and
  clear.
ChangeLog v3 -> v7:
- introduce TESTCLEARHPAGEFLAG() to determine the value of
  migratable_cleared.
ChangeLog v3 -> v6:
- introduce migratable_cleared to remember that HPageMigratable was
  cleared in error handling; it is needed to cancel the clearing when an
  error event is filtered by hwpoison_filter(). (Thanks to Miaohe)
ChangeLog v2 -> v3:
- move to the approach of clearing HPageMigratable instead of shifting
  dissolve_free_huge_pages().
---
 include/linux/hugetlb.h | 10 ++++++----
 include/linux/mm.h      |  6 ++++--
 mm/hugetlb.c            |  9 +++++----
 mm/memory-failure.c     | 21 +++++++++++++++++----
 4 files changed, 32 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a899bc76d677..3568b90b397d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -183,8 +183,9 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 						long freed);
 int isolate_hugetlb(struct page *page, struct list_head *list);
-int get_hwpoison_huge_page(struct page *page, bool *hugetlb);
-int get_huge_page_for_hwpoison(unsigned long pfn, int flags);
+int get_hwpoison_huge_page(struct page *page, bool *hugetlb, bool unpoison);
+int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+				bool *migratable_cleared);
 void putback_active_hugepage(struct page *page);
 void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason);
 void free_huge_page(struct page *page);
@@ -391,12 +392,13 @@ static inline int isolate_hugetlb(struct page *page, struct list_head *list)
 	return -EBUSY;
 }
 
-static inline int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
+static inline int get_hwpoison_huge_page(struct page *page, bool *hugetlb, bool unpoison)
 {
 	return 0;
 }
 
-static inline int get_huge_page_for_hwpoison(unsigned long pfn, int flags)
+static inline int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+					bool *migratable_cleared)
 {
 	return 0;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 58345f06a2f4..3da6283c9d30 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3246,9 +3246,11 @@ extern void shake_page(struct page *p);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(unsigned long pfn, int flags);
 #ifdef CONFIG_MEMORY_FAILURE
-extern int __get_huge_page_for_hwpoison(unsigned long pfn, int flags);
+extern int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+					bool *migratable_cleared);
 #else
-static inline int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
+static inline int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+					bool *migratable_cleared)
 {
 	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 931789a8f734..88d2dc756822 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7292,7 +7292,7 @@ int isolate_hugetlb(struct page *page, struct list_head *list)
 	return ret;
 }
 
-int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
+int get_hwpoison_huge_page(struct page *page, bool *hugetlb, bool unpoison)
 {
 	int ret = 0;
 
@@ -7302,7 +7302,7 @@ int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
 		*hugetlb = true;
 		if (HPageFreed(page))
 			ret = 0;
-		else if (HPageMigratable(page))
+		else if (HPageMigratable(page) || unpoison)
 			ret = get_page_unless_zero(page);
 		else
 			ret = -EBUSY;
@@ -7311,12 +7311,13 @@ int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
 	return ret;
 }
 
-int get_huge_page_for_hwpoison(unsigned long pfn, int flags)
+int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+				bool *migratable_cleared)
 {
 	int ret;
 
 	spin_lock_irq(&hugetlb_lock);
-	ret = __get_huge_page_for_hwpoison(pfn, flags);
+	ret = __get_huge_page_for_hwpoison(pfn, flags, migratable_cleared);
 	spin_unlock_irq(&hugetlb_lock);
 	return ret;
 }
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 62cf1e0fbc8e..0ba7032be8c0 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1250,7 +1250,7 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
 	int ret = 0;
 	bool hugetlb = false;
 
-	ret = get_hwpoison_huge_page(head, &hugetlb);
+	ret = get_hwpoison_huge_page(head, &hugetlb, false);
 	if (hugetlb)
 		return ret;
 
@@ -1340,7 +1340,7 @@ static int __get_unpoison_page(struct page *page)
 	int ret = 0;
 	bool hugetlb = false;
 
-	ret = get_hwpoison_huge_page(head, &hugetlb);
+	ret = get_hwpoison_huge_page(head, &hugetlb, true);
 	if (hugetlb)
 		return ret;
 
@@ -1791,7 +1791,8 @@ void hugetlb_clear_page_hwpoison(struct page *hpage)
  *  -EBUSY        - the hugepage is busy (try to retry)
  *  -EHWPOISON    - the hugepage is already hwpoisoned
  */
-int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
+int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
+				bool *migratable_cleared)
 {
 	struct page *page = pfn_to_page(pfn);
 	struct page *head = compound_head(page);
@@ -1821,6 +1822,15 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
 		goto out;
 	}
 
+	/*
+	 * Clearing HPageMigratable for hwpoisoned hugepages to prevent them
+	 * from being migrated by memory hotremove.
+	 */
+	if (count_increased && HPageMigratable(head)) {
+		ClearHPageMigratable(head);
+		*migratable_cleared = true;
+	}
+
 	return ret;
 out:
 	if (count_increased)
@@ -1840,10 +1850,11 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
 	struct page *p = pfn_to_page(pfn);
 	struct page *head;
 	unsigned long page_flags;
+	bool migratable_cleared = false;
 
 	*hugetlb = 1;
 retry:
-	res = get_huge_page_for_hwpoison(pfn, flags);
+	res = get_huge_page_for_hwpoison(pfn, flags, &migratable_cleared);
 	if (res == 2) { /* fallback to normal page handling */
 		*hugetlb = 0;
 		return 0;
@@ -1867,6 +1878,8 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
 
 	if (hwpoison_filter(p)) {
 		hugetlb_clear_page_hwpoison(head);
+		if (migratable_cleared)
+			SetHPageMigratable(head);
 		unlock_page(head);
 		if (res == 1)
 			put_page(head);
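A similar userspace sketch can illustrate the new unpoison argument (again with invented names, page_model and get_hwpoison_huge_page_model, modeling the hunks above rather than reproducing the real implementation). Because the patch clears HPageMigratable as soon as a hugepage is hwpoisoned, unpoison_memory() could never take a refcount if the grab were still gated on the migratable flag alone; passing unpoison=true from __get_unpoison_page() keeps that path working:

	#include <stdbool.h>
	#include <stdio.h>

	struct page_model {
		bool freed;		/* HPageFreed      */
		bool migratable;	/* HPageMigratable */
		int refcount;
	};

	/* Mirrors the hunk "else if (HPageMigratable(page) || unpoison)". */
	static int get_hwpoison_huge_page_model(struct page_model *p, bool unpoison)
	{
		if (p->freed)
			return 0;		/* free hugepage, no ref taken */
		if (p->migratable || unpoison) {
			p->refcount++;		/* get_page_unless_zero() */
			return 1;
		}
		return -16;			/* -EBUSY */
	}

	int main(void)
	{
		/* A hwpoisoned hugepage: HPageMigratable already cleared. */
		struct page_model p = { .freed = false, .migratable = false,
					.refcount = 1 };

		printf("memory_failure path: %d\n",
		       get_hwpoison_huge_page_model(&p, false));	/* -EBUSY */
		printf("unpoison path: %d\n",
		       get_hwpoison_huge_page_model(&p, true));	/* 1, ref taken */
		return 0;
	}

Run as-is, the first call fails with -EBUSY (the page is neither free nor migratable) while the unpoison call still pins the page, matching the "HPageMigratable(page) || unpoison" condition in mm/hugetlb.c above.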