From patchwork Mon May 8 01:44:32 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 90916
From: Tong Tiangen
To: Catalin Marinas, Mark Rutland, James Morse, Andrew Morton,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
    Dave Hansen, Will Deacon, Alexander Viro, "H. Peter Anvin"
Cc: Kefeng Wang, Guohanjun, Xie XiuQi, Tong Tiangen
Subject: [PATCH -next v9 1/5] uaccess: add generic fallback version of copy_mc_to_user()
Date: Mon, 8 May 2023 09:44:32 +0800
Message-ID: <20230508014436.198717-2-tongtiangen@huawei.com>
In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com>
References: <20230508014436.198717-1-tongtiangen@huawei.com>

x86 and powerpc each have their own implementation of copy_mc_to_user().
Add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
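A minimal usage sketch, not part of the patch: the helper has
copy_to_user()-style semantics (it returns the number of bytes left
uncopied), and an architecture that provides a hardened version announces
it by defining a macro of the same name, which the #ifndef fallback below
respects. The function and buffer names here are invented for illustration.

#include <linux/uaccess.h>

/*
 * Hypothetical caller: copy a kernel buffer that may reside in poisoned
 * memory out to user space. With an arch implementation of
 * copy_mc_to_user(), a consumed memory error shows up as a short copy
 * instead of a kernel panic; with the generic fallback it behaves like
 * copy_to_user() without the access_ok() check (raw_copy_to_user()).
 */
static int read_possibly_poisoned(void __user *ubuf, const void *kbuf,
				  size_t len)
{
	unsigned long left = copy_mc_to_user(ubuf, kbuf, len);

	return left ? -EFAULT : 0;
}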
Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index a2d255aa9627..1713dff36568 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 457e814712af..37d3c078d768 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -573,6 +573,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;

From patchwork Mon May 8 01:44:33 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 90912
From: Tong Tiangen
To: Catalin Marinas, Mark Rutland, James Morse, Andrew Morton,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
    Dave Hansen, Will Deacon, Alexander Viro, "H. Peter Anvin"
Cc: Kefeng Wang, Guohanjun, Xie XiuQi, Tong Tiangen
Subject: [PATCH -next v9 2/5] arm64: add support for machine check error safe
Date: Mon, 8 May 2023 09:44:33 +0800
Message-ID: <20230508014436.198717-3-tongtiangen@huawei.com>
In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com>
References: <20230508014436.198717-1-tongtiangen@huawei.com>

When the arm64 kernel handles a hardware memory error delivered as a
synchronous notification (do_sea()) and the error has been consumed within
the kernel, the current handling is to panic. That is not optimal. Take
uaccess as an example: if a uaccess operation fails due to a memory error,
only the user process is affected, so killing the user process and
isolating the corrupt page is a better choice.

This patch only enables the machine check safe framework and adds an
exception fixup before the kernel panics in do_sea().
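The user-visible contract this framework works toward (once later patches
in the series register extable types with fixup_exception_mc()) can be
illustrated from user space. The program below is purely an invented
example, not part of the series: instead of the whole machine going down,
the affected process receives SIGBUS via arm64_force_sig_fault() and can
fail cleanly.

#include <signal.h>
#include <unistd.h>

static void on_sigbus(int sig)
{
	static const char msg[] = "SIGBUS: uncorrected memory error\n";

	(void)sig;
	/* write() is async-signal-safe, unlike fprintf() */
	write(STDERR_FILENO, msg, sizeof(msg) - 1);
	_exit(1);
}

int main(void)
{
	signal(SIGBUS, on_sigbus);
	/* ... issue syscalls whose kernel-side uaccess may touch poisoned memory ... */
	return 0;
}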
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 16 ++++++++++++++++
 arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
 4 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1201d25a8a4..730b815acfca 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..478e639f8680 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete, More Machine check safe extable type can
+	 * be processed here.
+	 */
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9e0db5c387e3..4d490d820f1a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -749,6 +749,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs))
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	if (current->flags & PF_KTHREAD)
+		return true;
+
+	set_thread_esr(0, esr);
+	arm64_force_sig_fault(sig, code, addr,
+			      "Uncorrected memory error on access to user memory\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -774,7 +799,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }
From patchwork Mon May 8 01:44:34 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 90917
From: Tong Tiangen
To: Catalin Marinas, Mark Rutland, James Morse, Andrew Morton,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
    Dave Hansen, Will Deacon, Alexander Viro, "H. Peter Anvin"
Cc: Kefeng Wang, Guohanjun, Xie XiuQi, Tong Tiangen
Subject: [PATCH -next v9 3/5] arm64: add uaccess to machine check safe
Date: Mon, 8 May 2023 09:44:34 +0800
Message-ID: <20230508014436.198717-4-tongtiangen@huawei.com>
In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com>
References: <20230508014436.198717-1-tongtiangen@huawei.com>

If a user process's memory access fails due to a hardware memory error,
only the relevant process is affected, so it is more reasonable to kill
that user process and isolate the corrupt page than to panic the kernel.
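A hedged kernel-side sketch of what this one-hunk change buys (the function
below is hypothetical, not from the series): arm64's user-copy routines
annotate their loads and stores with EX_TYPE_UACCESS_ERR_ZERO extable
entries, so once fixup_exception_mc() handles that type, a copy that
consumes an uncorrected error returns a nonzero remainder instead of
panicking, and do_sea() kills the task with SIGBUS.

#include <linux/uaccess.h>

/*
 * Hypothetical syscall path. Before this patch, a memory error consumed
 * by the copy below panics the kernel; after it, the extable fixup makes
 * copy_from_user() return short and only this task dies.
 */
static long fetch_from_user(void *kbuf, const void __user *ubuf, size_t len)
{
	if (copy_from_user(kbuf, ubuf, len))
		return -EFAULT;	/* reached via the machine-check fixup */

	return 0;
}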
Peter Anvin" CC: , , , Kefeng Wang , Guohanjun , Xie XiuQi , Tong Tiangen Subject: [PATCH -next v9 3/5] arm64: add uaccess to machine check safe Date: Mon, 8 May 2023 09:44:34 +0800 Message-ID: <20230508014436.198717-4-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com> References: <20230508014436.198717-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1765291144638427344?= X-GMAIL-MSGID: =?utf-8?q?1765291144638427344?= If user process access memory fails due to hardware memory error, only the relevant processes are affected, so it is more reasonable to kill the user process and isolate the corrupt page than to panic the kernel. Signed-off-by: Tong Tiangen --- arch/arm64/mm/extable.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 478e639f8680..28ec35e3d210 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs) if (!ex) return false; - /* - * This is not complete, More Machine check safe extable type can - * be processed here. - */ + switch (ex->type) { + case EX_TYPE_UACCESS_ERR_ZERO: + return ex_handler_uaccess_err_zero(ex, regs); + } return false; } From patchwork Mon May 8 01:44:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 90915 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1863375vqo; Sun, 7 May 2023 19:14:20 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4AYDK5y22uSTOwtRjTS7pClVYVwx3G0jLQMkJ0BJrMMRsgrWXTNCdVGuyRhPSp06KflHUp X-Received: by 2002:a17:902:d2c9:b0:1ac:61ad:d6bd with SMTP id n9-20020a170902d2c900b001ac61add6bdmr5634273plc.65.1683512060475; Sun, 07 May 2023 19:14:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1683512060; cv=none; d=google.com; s=arc-20160816; b=qav8K4kEiceoVg/x5Ys0lNG8rIMOyf/yyw9fkjURwx5QCbe5ZqijyO/RxaCQbdSFpp AS1LjE0RscxbfB9lLXUFmOkacB9kQUdWYkkY1dh/3nuG7tVOCrsHSk4tFFH1Y3LuZuDd kCWi/QJTbUso+0nHE15cXM4aBloK1AqZz5awr4oMQtjU7joiCX7j+DNLs248V2psJr5/ YrDvSOq8zj9LxkxLWDMSzT9UrzXu+lOEn3+B1x9sQBXvp02bEWcJBTIwZTDkplKgqfI0 JnYvb/kKDtHvZmhiFsTNU5KQFqO35565wQ7svOXPAG+j3RgbTIGu13i+iPVq8tprMIHi cV/A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=dDxjDtgyFwFbxUTgZqlBdm7TmtNwdDEKJzeF52MA/B8=; b=KRWWWSyxfIBIppYJIQzcDVIx21GgVuTNNVVId7s+gMRh595R9VVyVXM27J3QtU3k6Q NYBICHhC+QPeOkgfi5fs04nb2b3OZMH4LBtgYAu63q/NskThZo8o+Gn8IDLPggzGg71x 02PDeW0YTJ2rJgbneBPkfWdGQcSCrDwXLV/sf8tC9UcgHYafSNk7+uIG0U3zVcRfYvGZ awl5Tp2G1gmL+O8ShwnKxl1XQeDPXXJ41Nb5sGmrCai9/lJn7YsjZMbUq24YsQmVTQ6U euBgFDVdm1w0uVXWyVXQMQXdQjf/j7iwI2Rs5DhfKyvDM60yqo1r6TXuyBiZNXO3s+8l KIkg== 
From: Tong Tiangen
To: Catalin Marinas, Mark Rutland, James Morse, Andrew Morton,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
    Dave Hansen, Will Deacon, Alexander Viro, "H. Peter Anvin"
Cc: Kefeng Wang, Guohanjun, Xie XiuQi, Tong Tiangen
Subject: [PATCH -next v9 4/5] mm/hwpoison: return -EFAULT when copy fails in copy_mc_[user]_highpage()
Date: Mon, 8 May 2023 09:44:35 +0800
Message-ID: <20230508014436.198717-5-tongtiangen@huawei.com>
In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com>
References: <20230508014436.198717-1-tongtiangen@huawei.com>

If a hardware error is encountered during page copying, returning the
number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data. Returning -EFAULT, which represents a
hardware error encountered during the copy, is more reasonable.
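With the new convention, callers only distinguish zero from nonzero. A
brief hypothetical caller (invented for illustration, mirroring how the
khugepaged hunks below change) shows the simplification over the old
"> 0 bytes uncopied" test:

#include <linux/highmem.h>

/* Hypothetical helper: any nonzero return is now always -EFAULT. */
static int collapse_copy_one(struct page *dst, struct page *src,
			     unsigned long addr, struct vm_area_struct *vma)
{
	if (copy_mc_user_highpage(dst, src, addr, vma))
		return -EFAULT;	/* source page had an uncorrected error */

	return 0;
}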
Signed-off-by: Tong Tiangen
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 4de1dbcd3ef6..c29f51ea8517 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6b9d39d65b73..ef8b70377292 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -805,7 +805,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2140,7 +2140,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				clear_highpage(hpage + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 				result = SCAN_COPY_MC;
 				goto rollback;
 			}

From patchwork Mon May 8 01:44:36 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 90913
Peter Anvin" CC: , , , Kefeng Wang , Guohanjun , Xie XiuQi , Tong Tiangen Subject: [PATCH -next v9 5/5] arm64: support copy_mc_[user]_highpage() Date: Mon, 8 May 2023 09:44:36 +0800 Message-ID: <20230508014436.198717-6-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230508014436.198717-1-tongtiangen@huawei.com> References: <20230508014436.198717-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1765289110792929635?= X-GMAIL-MSGID: =?utf-8?q?1765289110792929635?= Currently, many scenarios that can tolerate memory errors when copying page have been supported in the kernel[1][2][3], all of which are implemented by copy_mc_[user]_highpage(). arm64 should also support this mechanism. Due to mte, arm64 needs to have its own copy_mc_[user]_highpage() architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it. Add new helper copy_mc_page() which provide a page copy implementation with machine check safe. The copy_mc_page() in copy_mc_page.S is largely borrows from copy_page() in copy_page.S and the main difference is copy_mc_page() add extable entry to every load/store insn to support machine check safe. Add new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO which used in copy_mc_page(). 
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-extable.h | 15 +++++
 arch/arm64/include/asm/assembler.h   |  4 ++
 arch/arm64/include/asm/mte.h         |  5 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 89 ++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 | 27 +++++++++
 arch/arm64/mm/copypage.c             | 64 +++++++++++++++++---
 arch/arm64/mm/extable.c              |  7 ++-
 include/linux/highmem.h              |  4 ++
 10 files changed, 217 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..819044fefbe7 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,6 +10,7 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	2
 #define EX_TYPE_KACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +52,16 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)				\
+	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -59,6 +70,10 @@
 	_ASM_EXTABLE_UACCESS(\insn, \fixup)
 	.endm
 
+	.macro _asm_extable_copy_mc_page, insn, fixup
+	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 376a980f2bad..547ab2f85888 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -154,6 +154,10 @@ lr	.req	x30		// link register
 #define CPU_LE(code...) code
 #endif
 
+#define CPY_MC(l, x...)				\
+9999:	x;					\
+	_asm_extable_copy_mc_page	9999b, l
+
 /*
  * Define a macro that constructs a 64-bit value by concatenating two
 * 32-bit registers. Note that on big endian systems the order of the
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index c028afb1cd0b..df00d4aa226a 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t old_pte, pte_t pte)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..656d831ef4b8
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+SYM_FUNC_START(__pi_copy_mc_page)
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..2b748e83f6cf 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if copy success, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+CPY_MC(2f, ldgm x4, [x3])
+CPY_MC(2f, stgm x4, [x2])
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov x0, #0
+	ret
+
+2:	mov x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 4aadcfb01754..7cbc12ee6009 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,21 +14,34 @@
 #include
 #include
 
-void copy_highpage(struct page *to, struct page *from)
+static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
 {
-	void *kto = page_address(to);
-	void *kfrom = page_address(from);
-
-	copy_page(kto, kfrom);
+	int ret = 0;
 
 	if (system_supports_mte() && page_mte_tagged(from)) {
 		if (kasan_hw_tags_enabled())
 			page_kasan_tag_reset(to);
 		/* It's a new page, shouldn't have been tagged yet */
 		WARN_ON_ONCE(!try_page_mte_tagging(to));
-		mte_copy_page_tags(kto, kfrom);
-		set_page_mte_tagged(to);
+		if (mc)
+			ret = mte_copy_mc_page_tags(kto, kfrom);
+		else
+			mte_copy_page_tags(kto, kfrom);
+
+		if (!ret)
+			set_page_mte_tagged(to);
 	}
+
+	return ret;
+}
+
+void copy_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom, false);
 }
 EXPORT_SYMBOL(copy_highpage);
 
@@ -39,3 +52,40 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	ret = do_mte(to, from, kto, kfrom, true);
+	if (ret)
+		return -EFAULT;
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 28ec35e3d210..bdc81518d207 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
 	return ((unsigned long)&ex->fixup + ex->fixup);
 }
 
-static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
+static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
 	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
@@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
 	case EX_TYPE_KACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+		return ex_handler_fixup_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
 	}
@@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
+		return ex_handler_fixup_err_zero(ex, regs);
 	}
 
 	return false;
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c29f51ea8517..b36ac0d99176 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -371,19 +371,23 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	return ret ? -EFAULT : 0;
 }
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,