Message ID | 20230508022233.13890-1-wangkefeng.wang@huawei.com |
---|---|
State | New |
Headers |
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Tony Luck <tony.luck@intel.com>, Borislav Petkov <bp@alien8.de>, Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org, Andrew Morton <akpm@linux-foundation.org>, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, jane.chu@oracle.com, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH] x86/mce: set MCE_IN_KERNEL_COPYIN for all MC-Safe Copy
Date: Mon, 8 May 2023 10:22:33 +0800
Message-ID: <20230508022233.13890-1-wangkefeng.wang@huawei.com>
Series |
x86/mce: set MCE_IN_KERNEL_COPYIN for all MC-Safe Copy
Commit Message
Kefeng Wang
May 8, 2023, 2:22 a.m. UTC
Both the EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
fixup types are used to identify fixups that allow in-kernel #MC
recovery, that is, Machine Check Safe (MC-safe) copy.
For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
and EX_TYPE_UACCESS when copying from user space, and only in that case
is the corrupted page isolated automatically. For the other MC-safe
copies, memory_failure() is not always called: places such as
__wp_page_copy_user(), copy_subpage(), copy_user_gigantic_page() and
ksm_might_need_to_copy() have to call memory_failure_queue() manually
to cope with such unhandled error pages. The recent coredump hwpoison
recovery support [1] is asked to do the same thing, and other existing
MC-safe copy scenarios, e.g. nvdimm, dm-writecache and dax, have the
same issue. The pre-patch caller pattern is sketched just below.
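A minimal sketch of that pre-patch caller pattern, assuming a hypothetical
wrapper: copy_page_mc_prepatch() is made up for illustration, while
copy_mc_user_highpage(), memory_failure_queue() and page_to_pfn() are the
real helpers that appear in the diff below.

#include <linux/highmem.h>
#include <linux/mm.h>

/* Hypothetical wrapper, not from the patch: before this change, every
 * MC-safe copy site has to queue memory_failure() by hand when the copy
 * trips over poisoned memory. */
static int copy_page_mc_prepatch(struct page *dst, struct page *src,
				 unsigned long addr, struct vm_area_struct *vma)
{
	if (copy_mc_user_highpage(dst, src, addr, vma)) {
		/* caller-side isolation, easy to forget in new call sites */
		memory_failure_queue(page_to_pfn(src), 0);
		return -EHWPOISON;
	}
	return 0;
}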
The best way to fix all of them is to set MCE_IN_KERNEL_COPYIN for the
MCE_SAFE exception fixup types as well. kill_me_never() is then queued
from do_machine_check() and calls memory_failure() to isolate the
corrupted page, which avoids having to call memory_failure_queue()
after every MC-safe copy returns.
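For readers unfamiliar with that path: when error_context() returns
IN_KERNEL_RECOV with MCE_IN_KERNEL_COPYIN set, do_machine_check() queues
kill_me_never() via queue_task_work() instead of the task-killing
kill_me_maybe(). The following is a simplified sketch of kill_me_never(),
paraphrased from memory of arch/x86/kernel/cpu/mce/core.c (details are
approximate; physical-address masking is omitted), not verbatim kernel code.

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/set_memory.h>

/* Sketch only: isolate the poisoned page but never signal the current
 * task, which is exactly what an in-kernel MC-safe copy wants. */
static void kill_me_never(struct callback_head *cb)
{
	struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);
	unsigned long pfn = p->mce_addr >> PAGE_SHIFT;	/* PA masking omitted */

	pr_err("Kernel accessed poison in user space at %llx\n", p->mce_addr);
	if (!memory_failure(pfn, 0))	/* flags == 0: not MF_ACTION_REQUIRED */
		set_mce_nospec(pfn);	/* drop the kernel mapping of the bad page */
}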
[1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/x86/kernel/cpu/mce/severity.c | 3 +--
mm/ksm.c | 1 -
mm/memory.c | 12 +++---------
3 files changed, 4 insertions(+), 12 deletions(-)
Comments
On Mon, May 08, 2023 at 10:22:33AM +0800, Kefeng Wang wrote:
> Both EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
> fixup types are used to identify fixups which allow in kernel #MC
> recovery, that is the Machine Check Safe Copy.
>
> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
> and EX_TYPE_UACCESS when copy from user, and corrupted page is
> isolated in this case, for MC-safe copy, memory_failure() is not
> always called, some places, like __wp_page_copy_user, copy_subpage,
> copy_user_gigantic_page and ksm_might_need_to_copy manually call
> memory_failure_queue() to cope with such unhandled error pages,
> recently coredump hwposion recovery support[1] is asked to do the
> same thing, and there are some other already existed MC-safe copy
> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.
>
> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
> exception, then kill_me_never() will be queued to call memory_failure()
> in do_machine_check() to isolate corrupted page, which avoid calling
> memory_failure_queue() after every MC-safe copy return.
>
> [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Looks good to me, thank you.

Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Hi Tony and all x86 maintainers, kindly ping, thanks. On 2023/5/8 10:22, Kefeng Wang wrote: > Both EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception > fixup types are used to identify fixups which allow in kernel #MC > recovery, that is the Machine Check Safe Copy. > > For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY > and EX_TYPE_UACCESS when copy from user, and corrupted page is > isolated in this case, for MC-safe copy, memory_failure() is not > always called, some places, like __wp_page_copy_user, copy_subpage, > copy_user_gigantic_page and ksm_might_need_to_copy manually call > memory_failure_queue() to cope with such unhandled error pages, > recently coredump hwposion recovery support[1] is asked to do the > same thing, and there are some other already existed MC-safe copy > scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue. > > The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE > exception, then kill_me_never() will be queued to call memory_failure() > in do_machine_check() to isolate corrupted page, which avoid calling > memory_failure_queue() after every MC-safe copy return. > > [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com > > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> > --- > arch/x86/kernel/cpu/mce/severity.c | 3 +-- > mm/ksm.c | 1 - > mm/memory.c | 12 +++--------- > 3 files changed, 4 insertions(+), 12 deletions(-) > > diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c > index c4477162c07d..63e94484c5d6 100644 > --- a/arch/x86/kernel/cpu/mce/severity.c > +++ b/arch/x86/kernel/cpu/mce/severity.c > @@ -293,12 +293,11 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs) > case EX_TYPE_COPY: > if (!copy_user) > return IN_KERNEL; > - m->kflags |= MCE_IN_KERNEL_COPYIN; > fallthrough; > > case EX_TYPE_FAULT_MCE_SAFE: > case EX_TYPE_DEFAULT_MCE_SAFE: > - m->kflags |= MCE_IN_KERNEL_RECOV; > + m->kflags |= MCE_IN_KERNEL_RECOV | MCE_IN_KERNEL_COPYIN; > return IN_KERNEL_RECOV; > > default: > diff --git a/mm/ksm.c b/mm/ksm.c > index 0156bded3a66..7abdf4892387 100644 > --- a/mm/ksm.c > +++ b/mm/ksm.c > @@ -2794,7 +2794,6 @@ struct page *ksm_might_need_to_copy(struct page *page, > if (new_page) { > if (copy_mc_user_highpage(new_page, page, address, vma)) { > put_page(new_page); > - memory_failure_queue(page_to_pfn(page), 0); > return ERR_PTR(-EHWPOISON); > } > SetPageDirty(new_page); > diff --git a/mm/memory.c b/mm/memory.c > index 5e2c6b1fc00e..c0f586257017 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -2814,10 +2814,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src, > unsigned long addr = vmf->address; > > if (likely(src)) { > - if (copy_mc_user_highpage(dst, src, addr, vma)) { > - memory_failure_queue(page_to_pfn(src), 0); > + if (copy_mc_user_highpage(dst, src, addr, vma)) > return -EHWPOISON; > - } > return 0; > } > > @@ -5852,10 +5850,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src, > > cond_resched(); > if (copy_mc_user_highpage(dst_page, src_page, > - addr + i*PAGE_SIZE, vma)) { > - memory_failure_queue(page_to_pfn(src_page), 0); > + addr + i*PAGE_SIZE, vma)) > return -EHWPOISON; > - } > } > return 0; > } > @@ -5871,10 +5867,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg) > struct copy_subpage_arg *copy_arg = arg; > > if (copy_mc_user_highpage(copy_arg->dst + idx, copy_arg->src + idx, > - addr, copy_arg->vma)) { > - 
memory_failure_queue(page_to_pfn(copy_arg->src + idx), 0); > + addr, copy_arg->vma)) > return -EHWPOISON; > - } > return 0; > } >
> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
> and EX_TYPE_UACCESS when copy from user, and corrupted page is
> isolated in this case, for MC-safe copy, memory_failure() is not
> always called, some places, like __wp_page_copy_user, copy_subpage,
> copy_user_gigantic_page and ksm_might_need_to_copy manually call
> memory_failure_queue() to cope with such unhandled error pages,
> recently coredump hwposion recovery support[1] is asked to do the
> same thing, and there are some other already existed MC-safe copy
> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.
>
> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
> exception, then kill_me_never() will be queued to call memory_failure()
> in do_machine_check() to isolate corrupted page, which avoid calling
> memory_failure_queue() after every MC-safe copy return.
>
> [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com

Is this patch in addition to, or instead of, the earlier core dump patch?

I'd like to run some tests. Can you point me at the precise set of patches
that I should apply please?

-Tony
On 2023/5/20 0:17, Luck, Tony wrote:
>> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
>> and EX_TYPE_UACCESS when copy from user, and corrupted page is
>> isolated in this case, for MC-safe copy, memory_failure() is not
>> always called, some places, like __wp_page_copy_user, copy_subpage,
>> copy_user_gigantic_page and ksm_might_need_to_copy manually call
>> memory_failure_queue() to cope with such unhandled error pages,
>> recently coredump hwposion recovery support[1] is asked to do the
>> same thing, and there are some other already existed MC-safe copy
>> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.
>>
>> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
>> exception, then kill_me_never() will be queued to call memory_failure()
>> in do_machine_check() to isolate corrupted page, which avoid calling
>> memory_failure_queue() after every MC-safe copy return.
>>
>> [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com
>
> Is this patch in addition to, or instead of, the earlier core dump patch?

This is an addition. In the previous coredump patch, memory_failure_queue()
has to be called manually to cope with the corrupted page, which is similar
to your "Copy-on-write poison recovery" [1]. But after some discussion, I
think we could add MCE_IN_KERNEL_COPYIN to all MC-safe copies, which copes
with the corrupted page in the core do_machine_check() instead of doing it
one-by-one at each call site.

The related patches are:

  normal page CoW  [1]
  huge page CoW    [2]
  coredump         [3]
  ksm might copy   [4]

[1] d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
    a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
[2] 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[4] 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")

All of them are in v6.4-rc1. Thanks.

Kefeng

> I'd like to run some tests. Can you point me at the precise set of patches
> that I should apply please?
>
> -Tony
>
>> Is this patch in addition to, or instead of, the earlier core dump patch?
>
> This is an addition, in previous coredump patch, manually call
> memory_failure_queue()
> to be asked to cope with corrupted page, and it is similar to your
> "Copy-on-write poison recovery"[1], but after some discussion, I think
> we could add MCE_IN_KERNEL_COPYIN to all MC-safe copy, which will
> cope with corrupted page in the core do_machine_check() instead of
> do it one-by-one.

Thanks for the context. I see how this all fits together now.

Your patch looks good.

Reviewed-by: Tony Luck <tony.luck@intel.com>

-Tony

One small observation from testing. I injected to an application which consumed
the poisoned data and was sent a SIGBUS.

Kernel did not crash (hurrah!)

Console log said:

[ 417.610930] mce: [Hardware Error]: Machine check events logged
[ 417.618372] Memory failure: 0x89167f: recovery action for dirty LRU page: Recovered
... EDAC messages
[ 423.666918] MCE: Killing testprog:4770 due to hardware memory corruption fault at 7f8eccf35000

A core file was generated and saved in /var/lib/systemd/coredump

But my shell (/bin/bash) only said:

Bus error

not

Bus error (core dumped)

-Tony
On 2023/5/23 2:02, Luck, Tony wrote:
>>> Is this patch in addition to, or instead of, the earlier core dump patch?
>>
>> This is an addition, in previous coredump patch, manually call
>> memory_failure_queue()
>> to be asked to cope with corrupted page, and it is similar to your
>> "Copy-on-write poison recovery"[1], but after some discussion, I think
>> we could add MCE_IN_KERNEL_COPYIN to all MC-safe copy, which will
>> cope with corrupted page in the core do_machine_check() instead of
>> do it one-by-one.
>
> Thanks for the context. I see how this all fits together now.
>
> Your patch looks good.
>
> Reviewed-by: Tony Luck <tony.luck@intel.com>

Thanks for your confirmation.

>
> -Tony
>
> One small observation from testing. I injected to an application which consumed
> the poisoned data and was sent a SIGBUS.
>
> Kernel did not crash (hurrah!)

Yes, no crash is always great.

>
> Console log said:
>
> [ 417.610930] mce: [Hardware Error]: Machine check events logged
> [ 417.618372] Memory failure: 0x89167f: recovery action for dirty LRU page: Recovered
> ... EDAC messages
> [ 423.666918] MCE: Killing testprog:4770 due to hardware memory corruption fault at 7f8eccf35000
>
> A core file was generated and saved in /var/lib/systemd/coredump
>
> But my shell (/bin/bash) only said:
>
> Bus error
>
> not
>
> Bus error (core dumped)

I am not sure about the effect, but since there are the kernel messages and
mcelog, the difference does not seem to be a big deal :)

>
> -Tony
>
Hi x86/mm maintainers, could you pick this up as it has been reviewed by
Naoya and Tony, many thanks.

On 2023/5/8 10:22, Kefeng Wang wrote:
> Both EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
> fixup types are used to identify fixups which allow in kernel #MC
> recovery, that is the Machine Check Safe Copy.
>
> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
> and EX_TYPE_UACCESS when copy from user, and corrupted page is
> isolated in this case, for MC-safe copy, memory_failure() is not
> always called, some places, like __wp_page_copy_user, copy_subpage,
> copy_user_gigantic_page and ksm_might_need_to_copy manually call
> memory_failure_queue() to cope with such unhandled error pages,
> recently coredump hwposion recovery support[1] is asked to do the
> same thing, and there are some other already existed MC-safe copy
> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.
>
> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
> exception, then kill_me_never() will be queued to call memory_failure()
> in do_machine_check() to isolate corrupted page, which avoid calling
> memory_failure_queue() after every MC-safe copy return.
>
> [1] https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
On 5/7/23 19:22, Kefeng Wang wrote:
> Both EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
> fixup types are used to identify fixups which allow in kernel #MC
> recovery, that is the Machine Check Safe Copy.
>
> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
> and EX_TYPE_UACCESS when copy from user, and corrupted page is
> isolated in this case, for MC-safe copy, memory_failure() is not
> always called, some places, like __wp_page_copy_user, copy_subpage,
> copy_user_gigantic_page and ksm_might_need_to_copy manually call
> memory_failure_queue() to cope with such unhandled error pages,
> recently coredump hwposion recovery support[1] is asked to do the
> same thing, and there are some other already existed MC-safe copy
> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.

That has to set some kind of record for run-on sentences. Could you
please try to rewrite this coherently?

> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
> exception, then kill_me_never() will be queued to call memory_failure()
> in do_machine_check() to isolate corrupted page, which avoid calling
> memory_failure_queue() after every MC-safe copy return.

Could you try to send a v2 of this with a clear problem statement?

What is the end user visible effect of the problem and of your solution?
On 2023/5/26 1:18, Dave Hansen wrote:
> On 5/7/23 19:22, Kefeng Wang wrote:
>> Both EX_TYPE_FAULT_MCE_SAFE and EX_TYPE_DEFAULT_MCE_SAFE exception
>> fixup types are used to identify fixups which allow in kernel #MC
>> recovery, that is the Machine Check Safe Copy.
>>
>> For now, the MCE_IN_KERNEL_COPYIN flag is only set for EX_TYPE_COPY
>> and EX_TYPE_UACCESS when copy from user, and corrupted page is
>> isolated in this case, for MC-safe copy, memory_failure() is not
>> always called, some places, like __wp_page_copy_user, copy_subpage,
>> copy_user_gigantic_page and ksm_might_need_to_copy manually call
>> memory_failure_queue() to cope with such unhandled error pages,
>> recently coredump hwposion recovery support[1] is asked to do the
>> same thing, and there are some other already existed MC-safe copy
>> scenarios, eg, nvdimm, dm-writecache, dax, which has similar issue.
>
> That has to set some kind of record for run-on sentences. Could you
> please try to rewrite this coherently?
>
>> The best way to fix them is set MCE_IN_KERNEL_COPYIN to MCE_SAFE
>> exception, then kill_me_never() will be queued to call memory_failure()
>> in do_machine_check() to isolate corrupted page, which avoid calling
>> memory_failure_queue() after every MC-safe copy return.
>
> Could you try to send a v2 of this with a clear problem statement?
>
:( I will try to make it clearer.

> What is the end user visible effect of the problem and of your solution?

The corrupted page won't be isolated in the MC-safe copy scenarios, and it
could be accessed again by user applications.
diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
index c4477162c07d..63e94484c5d6 100644
--- a/arch/x86/kernel/cpu/mce/severity.c
+++ b/arch/x86/kernel/cpu/mce/severity.c
@@ -293,12 +293,11 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
 	case EX_TYPE_COPY:
 		if (!copy_user)
 			return IN_KERNEL;
-		m->kflags |= MCE_IN_KERNEL_COPYIN;
 		fallthrough;
 
 	case EX_TYPE_FAULT_MCE_SAFE:
 	case EX_TYPE_DEFAULT_MCE_SAFE:
-		m->kflags |= MCE_IN_KERNEL_RECOV;
+		m->kflags |= MCE_IN_KERNEL_RECOV | MCE_IN_KERNEL_COPYIN;
 		return IN_KERNEL_RECOV;
 
 	default:
diff --git a/mm/ksm.c b/mm/ksm.c
index 0156bded3a66..7abdf4892387 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2794,7 +2794,6 @@ struct page *ksm_might_need_to_copy(struct page *page,
 	if (new_page) {
 		if (copy_mc_user_highpage(new_page, page, address, vma)) {
 			put_page(new_page);
-			memory_failure_queue(page_to_pfn(page), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
 		SetPageDirty(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index 5e2c6b1fc00e..c0f586257017 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2814,10 +2814,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -5852,10 +5850,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 
 		cond_resched();
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
@@ -5871,10 +5867,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct copy_subpage_arg *copy_arg = arg;
 
 	if (copy_mc_user_highpage(copy_arg->dst + idx, copy_arg->src + idx,
-				  addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(copy_arg->src + idx), 0);
+				  addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
 
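To show what the severity.c change buys the other MC-safe copy scenarios
mentioned in the commit message (nvdimm, dm-writecache, dax), here is a
hypothetical post-patch caller: copy_mc_to_kernel() is the real helper,
while read_pmem_block() and its error policy are made up for illustration.
Page isolation now happens in the #MC core, so the caller only reports the
failure.

#include <linux/errno.h>
#include <linux/uaccess.h>

/* Hypothetical post-patch caller: no memory_failure_queue() call is needed
 * here, the #MC core isolates the poisoned page via kill_me_never(). */
static int read_pmem_block(void *dst, const void *pmem_addr, size_t len)
{
	unsigned long rem;

	rem = copy_mc_to_kernel(dst, pmem_addr, len);	/* bytes not copied */
	if (rem)
		return -EIO;
	return 0;
}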