From patchwork Wed Feb 1 07:44:33 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 51202
From: Kefeng Wang
Subject: [PATCH v4] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Wed, 1 Feb 2023 15:44:33 +0800
Message-ID: <20230201074433.96641-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
X-Mailing-List: linux-kernel@vger.kernel.org

When the kernel copies a page from ksm_might_need_to_copy() but runs into
an uncorrectable memory error, it will crash, since the poisoned page is
consumed by the kernel; this is similar to the issue recently fixed by
copy-on-write poison recovery.

When an error is detected during the page copy, return VM_FAULT_HWPOISON
in do_swap_page(), and install a hwpoison entry in unuse_pte() when
swapoff, which helps us to avoid a system crash. Note that memory failure
on a KSM page will be skipped, but memory_failure_queue() is still called
to stay consistent with the general memory failure process; KSM page
recovery could be supported in the future.
Signed-off-by: Kefeng Wang
Reviewed-by: Naoya Horiguchi
---
v4:
- update changelog and directly return ERR_PTR(-EHWPOISON) in
  ksm_might_need_to_copy(), as suggested by HORIGUCHI NAOYA
- add back unlikely() in unuse_pte()

 mm/ksm.c      |  7 +++++--
 mm/memory.c   |  3 +++
 mm/swapfile.c | 20 ++++++++++++++------
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index dd02780c387f..addf490da146 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,11 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return ERR_PTR(-EHWPOISON);
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index aad226daf41b..5b2c137dfb2a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
 			goto out_page;
+		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+			ret = VM_FAULT_HWPOISON;
+			goto out_page;
 		}
 		folio = page_folio(page);
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 908a529bca12..3ef2468d7130 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1763,12 +1763,15 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte, new_pte;
+	bool hwpoisoned = false;
 	int ret = 1;
 
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
 	if (unlikely(!page))
 		return -ENOMEM;
+	else if (unlikely(PTR_ERR(page) == -EHWPOISON))
+		hwpoisoned = true;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
@@ -1776,15 +1779,19 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		goto out;
 	}
 
-	if (unlikely(!PageUptodate(page))) {
-		pte_t pteval;
+	if (unlikely(hwpoisoned || !PageUptodate(page))) {
+		swp_entry_t swp_entry;
 
 		dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
-		pteval = swp_entry_to_pte(make_swapin_error_entry());
-		set_pte_at(vma->vm_mm, addr, pte, pteval);
-		swap_free(entry);
+		if (hwpoisoned) {
+			swp_entry = make_hwpoison_entry(swapcache);
+			page = swapcache;
+		} else {
+			swp_entry = make_swapin_error_entry();
+		}
+		new_pte = swp_entry_to_pte(swp_entry);
 		ret = 0;
-		goto out;
+		goto setpte;
 	}
 
 	/* See do_swap_page() */
@@ -1816,6 +1823,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		new_pte = pte_mksoft_dirty(new_pte);
 	if (pte_swp_uffd_wp(*pte))
 		new_pte = pte_mkuffd_wp(new_pte);
+setpte:
 	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out: