From patchwork Mon Jan 9 06:24:05 2023
X-Patchwork-Submitter: JeeHeng Sia
X-Patchwork-Id: 40676
From: Sia Jee Heng
Subject: [PATCH v2 1/3] RISC-V: Change suspend_save_csrs and suspend_restore_csrs to public functions
Date: Mon, 9 Jan 2023 14:24:05 +0800
Message-ID: <20230109062407.3235-2-jeeheng.sia@starfivetech.com>
In-Reply-To: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>
References: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>

Currently, suspend_save_csrs() and suspend_restore_csrs() are statically
defined in suspend.c. Make them public (drop the static attribute) so that
the hibernation code can use them as well.
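For illustration, a minimal sketch (not part of this patch) of how a caller
can use the now-public helpers around __cpu_suspend_enter(); it mirrors the
swsusp_arch_suspend() added in patch 3 of this series, and the function and
context names here are made up for the example:

	#include <linux/suspend.h>	/* swsusp_save() */
	#include <asm/suspend.h>	/* struct suspend_context, helpers */

	static struct suspend_context example_ctx;

	static int example_arch_suspend(void)
	{
		int ret = 0;

		if (__cpu_suspend_enter(&example_ctx)) {
			/* Suspend pass: save the CSRs, then snapshot memory. */
			suspend_save_csrs(&example_ctx);
			ret = swsusp_save();
		} else {
			/* Resume pass: put the saved CSRs back before returning. */
			suspend_restore_csrs(&example_ctx);
		}

		return ret;
	}
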
Signed-off-by: Sia Jee Heng
Reviewed-by: Ley Foon Tan
Reviewed-by: Mason Huo
---
 arch/riscv/include/asm/suspend.h | 3 +++
 arch/riscv/kernel/suspend.c      | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/suspend.h b/arch/riscv/include/asm/suspend.h
index 8be391c2aecb..75419c5ca272 100644
--- a/arch/riscv/include/asm/suspend.h
+++ b/arch/riscv/include/asm/suspend.h
@@ -33,4 +33,7 @@ int cpu_suspend(unsigned long arg,
 /* Low-level CPU resume entry function */
 int __cpu_resume_enter(unsigned long hartid, unsigned long context);
 
+/* Used to save and restore the csr */
+void suspend_save_csrs(struct suspend_context *context);
+void suspend_restore_csrs(struct suspend_context *context);
 #endif
diff --git a/arch/riscv/kernel/suspend.c b/arch/riscv/kernel/suspend.c
index 9ba24fb8cc93..3c89b8ec69c4 100644
--- a/arch/riscv/kernel/suspend.c
+++ b/arch/riscv/kernel/suspend.c
@@ -8,7 +8,7 @@
 #include
 #include
 
-static void suspend_save_csrs(struct suspend_context *context)
+void suspend_save_csrs(struct suspend_context *context)
 {
 	context->scratch = csr_read(CSR_SCRATCH);
 	context->tvec = csr_read(CSR_TVEC);
@@ -29,7 +29,7 @@ static void suspend_save_csrs(struct suspend_context *context)
 #endif
 }
 
-static void suspend_restore_csrs(struct suspend_context *context)
+void suspend_restore_csrs(struct suspend_context *context)
 {
 	csr_write(CSR_SCRATCH, context->scratch);
 	csr_write(CSR_TVEC, context->tvec);

From patchwork Mon Jan 9 06:24:06 2023
X-Patchwork-Submitter: JeeHeng Sia
X-Patchwork-Id: 40677
From: Sia Jee Heng
Subject: [PATCH v2 2/3] RISC-V: mm: Enable huge page support in the kernel_page_present() function
Date: Mon, 9 Jan 2023 14:24:06 +0800
Message-ID: <20230109062407.3235-3-jeeheng.sia@starfivetech.com>
In-Reply-To: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>
References: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>

Currently kernel_page_present() does not detect huge page mappings, so it
mistakenly reports pages that are mapped by huge pages as not present to
the hibernation core. Add huge page detection to the function to solve the
problem.
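For reference, after this patch the lower levels of the walk reduce to the
pattern below (a condensed sketch, not the literal diff; the wrapper name is
made up, while the helpers are the kernel's generic page-table accessors used
by the patched function):

	#include <linux/pgtable.h>

	static bool example_walk_from_pud(pud_t *pudp, unsigned long addr)
	{
		pmd_t *pmdp;
		pte_t *ptep;

		if (!pud_present(*pudp))
			return false;		/* nothing mapped at this level */
		if (pud_leaf(*pudp))
			return true;		/* huge mapping: no lower table to walk */

		pmdp = pmd_offset(pudp, addr);
		if (!pmd_present(*pmdp))
			return false;
		if (pmd_leaf(*pmdp))
			return true;		/* huge mapping at the PMD level */

		ptep = pte_offset_kernel(pmdp, addr);
		return pte_present(*ptep);	/* base page case */
	}
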
Signed-off-by: Sia Jee Heng
Reviewed-by: Ley Foon Tan
Reviewed-by: Mason Huo
---
 arch/riscv/mm/pageattr.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 86c56616e5de..73fdec8c0a72 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -221,14 +221,20 @@ bool kernel_page_present(struct page *page)
 	p4d = p4d_offset(pgd, addr);
 	if (!p4d_present(*p4d))
 		return false;
+	if (p4d_leaf(*p4d))
+		return true;
 	pud = pud_offset(p4d, addr);
 	if (!pud_present(*pud))
 		return false;
+	if (pud_leaf(*pud))
+		return true;
 	pmd = pmd_offset(pud, addr);
 	if (!pmd_present(*pmd))
 		return false;
+	if (pmd_leaf(*pmd))
+		return true;
 	pte = pte_offset_kernel(pmd, addr);
 	return pte_present(*pte);

From patchwork Mon Jan 9 06:24:07 2023
X-Patchwork-Submitter: JeeHeng Sia
X-Patchwork-Id: 40675
From: Sia Jee Heng
Subject: [PATCH v2 3/3] RISC-V: Add arch functions to support hibernation/suspend-to-disk
Date: Mon, 9 Jan 2023 14:24:07 +0800
Message-ID: <20230109062407.3235-4-jeeheng.sia@starfivetech.com>
In-Reply-To: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>
References: <20230109062407.3235-1-jeeheng.sia@starfivetech.com>

Add the low-level arch functions needed to support hibernation.

swsusp_arch_suspend() relies on __cpu_suspend_enter() to write the CPU
state onto the stack, then calls swsusp_save() to save the memory image.

arch_hibernation_header_save() and arch_hibernation_header_restore() are
implemented to prevent a kernel crash on resume: the kernel build version
is saved into the hibernation image header so that only an image created
by the same kernel is restored.

swsusp_arch_resume() creates a temporary page table that covers only the
linear map, copies the restore code to a 'safe' page, and then starts
restoring the memory image. Once that completes, it switches back to the
original kernel's page table and calls __hibernate_cpu_resume() to restore
the CPU context.
Finally, it follows the normal hibernation path back to the hibernation
core.

To enable hibernation/suspend-to-disk on RISC-V, the following config
options need to be enabled:
- CONFIG_ARCH_HIBERNATION_HEADER
- CONFIG_ARCH_HIBERNATION_POSSIBLE
- CONFIG_ARCH_RV64I
- CONFIG_64BIT

Signed-off-by: Sia Jee Heng
Reviewed-by: Ley Foon Tan
Reviewed-by: Mason Huo
---
 arch/riscv/Kconfig                |   8 +
 arch/riscv/include/asm/suspend.h  |  20 ++
 arch/riscv/kernel/Makefile        |   2 +-
 arch/riscv/kernel/asm-offsets.c   |   5 +
 arch/riscv/kernel/hibernate-asm.S | 123 +++++++++++
 arch/riscv/kernel/hibernate.c     | 353 ++++++++++++++++++++++++++++++
 6 files changed, 510 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kernel/hibernate-asm.S
 create mode 100644 arch/riscv/kernel/hibernate.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index e2b656043abf..3c2607b6bda7 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -690,6 +690,14 @@ menu "Power management options"
 
 source "kernel/power/Kconfig"
 
+config ARCH_HIBERNATION_POSSIBLE
+	def_bool y
+	depends on 64BIT
+
+config ARCH_HIBERNATION_HEADER
+	def_bool y
+	depends on HIBERNATION && 64BIT
+
 endmenu # "Power management options"
 
 menu "CPU Power Management"
diff --git a/arch/riscv/include/asm/suspend.h b/arch/riscv/include/asm/suspend.h
index 75419c5ca272..ebaf103aec40 100644
--- a/arch/riscv/include/asm/suspend.h
+++ b/arch/riscv/include/asm/suspend.h
@@ -21,6 +21,11 @@ struct suspend_context {
 #endif
 };
 
+/* This flag is set to 0 by the resume path and is then used by the
+ * hibernation core for the subsequent resume sequence.
+ */
+extern int in_suspend;
+
 /* Low-level CPU suspend entry function */
 int __cpu_suspend_enter(struct suspend_context *context);
 
@@ -36,4 +41,19 @@ int __cpu_resume_enter(unsigned long hartid, unsigned long context);
 /* Used to save and restore the csr */
 void suspend_save_csrs(struct suspend_context *context);
 void suspend_restore_csrs(struct suspend_context *context);
+
+/* Low-level API to support hibernation */
+int swsusp_arch_suspend(void);
+int swsusp_arch_resume(void);
+int arch_hibernation_header_save(void *addr, unsigned int max_size);
+int arch_hibernation_header_restore(void *addr);
+int __hibernate_cpu_resume(void);
+
+/* Used to resume on the CPU we hibernated on */
+int hibernate_resume_nonboot_cpu_disable(void);
+
+/* Used to restore the hibernated image */
+asmlinkage void restore_image(unsigned long resume_satp, unsigned long satp_temp,
+			      unsigned long cpu_resume);
+asmlinkage int core_restore_code(void);
 #endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 4cf303a779ab..df83b8cea631 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -64,7 +64,7 @@ obj-$(CONFIG_MODULES)		+= module.o
 obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
 
 obj-$(CONFIG_CPU_PM)		+= suspend_entry.o suspend.o
-
+obj-$(CONFIG_HIBERNATION)	+= hibernate.o hibernate-asm.o
 
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
 obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index df9444397908..d6a75aac1d27 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -116,6 +117,10 @@ void asm_offsets(void)
 	OFFSET(SUSPEND_CONTEXT_REGS, suspend_context, regs);
 
+	OFFSET(HIBERN_PBE_ADDR, pbe, address);
+	OFFSET(HIBERN_PBE_ORIG, pbe, orig_address);
+	OFFSET(HIBERN_PBE_NEXT, pbe, next);
+
 	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch,
guest_context.zero); OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra); OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp); diff --git a/arch/riscv/kernel/hibernate-asm.S b/arch/riscv/kernel/hibernate-asm.S new file mode 100644 index 000000000000..81d9dc98d0ad --- /dev/null +++ b/arch/riscv/kernel/hibernate-asm.S @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Hibernation support specific for RISCV + * + * Copyright (C) 2023 StarFive Technology Co., Ltd. + * + * Author: Jee Heng Sia + */ + +#include +#include +#include +#include + +/* + * These code are to be executed when resume from the hibernation. + * + * It begins with loads the temporary page table then restores the memory image. + * Finally branches to __hibernate_cpu_resume() to restore the state saved by + * swsusp_arch_suspend(). + */ + +/* + * int __hibernate_cpu_resume(void) + * Switch back to the hibernated image's page table prior to restore the CPU + * context. + * + * Always returns 0 to the C code. + */ +ENTRY(__hibernate_cpu_resume) + /* switch to hibernated image's page table */ + csrw CSR_SATP, s0 + sfence.vma + + ld a0, hibernate_cpu_context + + /* Restore CSRs */ + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0) + csrw CSR_EPC, t0 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0) + csrw CSR_STATUS, t0 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0) + csrw CSR_TVAL, t0 + REG_L t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0) + csrw CSR_CAUSE, t0 + + /* Restore registers (except A0 and T0-T6) */ + REG_L ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0) + REG_L sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0) + REG_L gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0) + REG_L tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0) + + REG_L s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0) + REG_L s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0) + REG_L a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0) + REG_L a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0) + REG_L a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0) + REG_L a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0) + REG_L a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0) + REG_L a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0) + REG_L a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0) + REG_L s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0) + REG_L s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0) + REG_L s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0) + REG_L s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0) + REG_L s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0) + REG_L s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0) + REG_L s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0) + REG_L s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0) + REG_L s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0) + REG_L s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0) + + /* Return zero value */ + add a0, zero, zero + + ret +END(__hibernate_cpu_resume) + +/* + * Prepare to restore the image. + * a0: satp of saved page tables + * a1: satp of temporary page tables + * a2: cpu_resume + */ +ENTRY(restore_image) + mv s0, a0 + mv s1, a1 + mv s2, a2 + ld s4, restore_pblist + ld a1, relocated_restore_code + + jalr a1 +END(restore_image) + +/* + * The below code will be executed from a 'safe' page. + * It first switches to the temporary page table, then start to copy the pages + * back to the original memory location. Finally, it jumps to the __hibernate_cpu_resume() + * to restore the CPU context. + */ +ENTRY(core_restore_code) + /* switch to temp page table */ + csrw satp, s1 + sfence.vma + beqz s4, done +loop: + /* The below code will restore the hibernated image. 
*/ + ld a1, HIBERN_PBE_ADDR(s4) + ld a0, HIBERN_PBE_ORIG(s4) + + lui a4, 0x1 + add a4, a4, a0 +copy: ld a5, 0(a1) + addi a0, a0, 8 + addi a1, a1, 8 + sd a5, -8(a0) + bne a4, a0, copy + + ld s4, HIBERN_PBE_NEXT(s4) + bnez s4, loop +done: + jalr s2 +END(core_restore_code) diff --git a/arch/riscv/kernel/hibernate.c b/arch/riscv/kernel/hibernate.c new file mode 100644 index 000000000000..bd77c35958a8 --- /dev/null +++ b/arch/riscv/kernel/hibernate.c @@ -0,0 +1,353 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Hibernation support specific for RISCV + * + * Copyright (C) 2023 StarFive Technology Co., Ltd. + * + * Author: Jee Heng Sia + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +/* + * The logical cpu number we should resume on, initialised to a non-cpu + * number. + */ +static int sleep_cpu = -EINVAL; + +/* CPU context to be saved */ +struct suspend_context *hibernate_cpu_context; + +unsigned long relocated_restore_code; + +/* Pointer to the temporary resume page tables */ +pgd_t *resume_pg_dir; + +/* + * Save the build number and date so that the we are not resume with a different kernel + */ +struct arch_hibernate_hdr_invariants { + char uts_version[__NEW_UTS_LEN + 1]; +}; + +/* Helper parameters that help us to restore the image. + * @hartid: To make sure same boot_cpu executing the hibernate/restore code. + * @saved_satp: Original page table used by the hibernated image. + * @restore_cpu_addr: The kernel's image address to restore the CPU context. + */ +static struct arch_hibernate_hdr { + struct arch_hibernate_hdr_invariants invariants; + unsigned long hartid; + unsigned long saved_satp; + unsigned long restore_cpu_addr; +} resume_hdr; + +static inline void arch_hdr_invariants(struct arch_hibernate_hdr_invariants *i) +{ + memset(i, 0, sizeof(*i)); + memcpy(i->uts_version, init_utsname()->version, sizeof(i->uts_version)); +} + +/* + * Check if the given pfn is in the 'nosave' section. + */ +int pfn_is_nosave(unsigned long pfn) +{ + unsigned long nosave_begin_pfn = sym_to_pfn(&__nosave_begin); + unsigned long nosave_end_pfn = sym_to_pfn(&__nosave_end - 1); + + return ((pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn)); +} + +void notrace save_processor_state(void) +{ + WARN_ON(num_online_cpus() != 1); +} + +void notrace restore_processor_state(void) +{ +} + +/* + * Helper parameters need to be saved to the hibernation image header. 
+ */ +int arch_hibernation_header_save(void *addr, unsigned int max_size) +{ + struct arch_hibernate_hdr *hdr = addr; + + if (max_size < sizeof(*hdr)) + return -EOVERFLOW; + + arch_hdr_invariants(&hdr->invariants); + + hdr->hartid = cpuid_to_hartid_map(sleep_cpu); + hdr->saved_satp = csr_read(CSR_SATP); + hdr->restore_cpu_addr = (unsigned long) __hibernate_cpu_resume; + + return 0; +} +EXPORT_SYMBOL(arch_hibernation_header_save); + +/* + * Retrieve the helper parameters from the hibernation image header + */ +int arch_hibernation_header_restore(void *addr) +{ + struct arch_hibernate_hdr_invariants invariants; + struct arch_hibernate_hdr *hdr = addr; + int ret = 0; + + arch_hdr_invariants(&invariants); + + if (memcmp(&hdr->invariants, &invariants, sizeof(invariants))) { + pr_crit("Hibernate image not generated by this kernel!\n"); + return -EINVAL; + } + + sleep_cpu = riscv_hartid_to_cpuid(hdr->hartid); + if (sleep_cpu < 0) { + pr_crit("Hibernated on a CPU not known to this kernel!\n"); + sleep_cpu = -EINVAL; + return -EINVAL; + } + +#ifdef CONFIG_SMP + ret = bringup_hibernate_cpu(sleep_cpu); + if (ret) { + sleep_cpu = -EINVAL; + return ret; + } +#endif + resume_hdr = *hdr; + + return ret; +} +EXPORT_SYMBOL(arch_hibernation_header_restore); + +int swsusp_arch_suspend(void) +{ + int ret = 0; + + if (__cpu_suspend_enter(hibernate_cpu_context)) { + sleep_cpu = smp_processor_id(); + suspend_save_csrs(hibernate_cpu_context); + ret = swsusp_save(); + } else { + suspend_restore_csrs(hibernate_cpu_context); + flush_tlb_all(); + + /* Invalidated Icache */ + flush_icache_all(); + + /* + * Tell the hibernation core that we've just restored + * the memory + */ + in_suspend = 0; + sleep_cpu = -EINVAL; + } + + return ret; +} + +#define temp_pgtable_map_pgd_next(pgdp, vaddr, prot) \ + (pgtable_l5_enabled ? \ + temp_pgtable_map_p4d(pgdp, vaddr, prot) : \ + (pgtable_l4_enabled ? 
\ + temp_pgtable_map_pud((pud_t *)pgdp, vaddr, prot) : \ + temp_pgtable_map_pmd((pmd_t *)pgdp, vaddr, prot))) + +static unsigned long temp_pgtable_map_pte(pte_t *ptep, unsigned long vaddr, pgprot_t prot) +{ + uintptr_t pte_idx = pte_index(vaddr); + + ptep[pte_idx] = pfn_pte(PFN_DOWN(__pa(vaddr)), prot); + + return 0; +} + +static unsigned long temp_pgtable_map_pmd(pmd_t *pmdp, unsigned long vaddr, pgprot_t prot) +{ + uintptr_t pmd_idx = pmd_index(vaddr); + pte_t *ptep; + + if (pmd_none(pmdp[pmd_idx])) { + ptep = (pte_t *) get_safe_page(GFP_ATOMIC); + if (!ptep) + return -ENOMEM; + + memset(ptep, 0, PAGE_SIZE); + pmdp[pmd_idx] = pfn_pmd(PFN_DOWN(__pa(ptep)), PAGE_TABLE); + } else { + ptep = (pte_t *) __va(PFN_PHYS(_pmd_pfn(pmdp[pmd_idx]))); + } + + return temp_pgtable_map_pte(ptep, vaddr, prot); +} + +static unsigned long temp_pgtable_map_pud(pud_t *pudp, unsigned long vaddr, pgprot_t prot) +{ + + uintptr_t pud_index = pud_index(vaddr); + pmd_t *pmdp; + + if (pud_val(pudp[pud_index]) == 0) { + pmdp = (pmd_t *) get_safe_page(GFP_ATOMIC); + if (!pmdp) + return -ENOMEM; + + memset(pmdp, 0, PAGE_SIZE); + pudp[pud_index] = pfn_pud(PFN_DOWN(__pa(pmdp)), PAGE_TABLE); + } else { + pmdp = (pmd_t *) __va(PFN_PHYS(_pud_pfn(pudp[pud_index]))); + } + + return temp_pgtable_map_pmd(pmdp, vaddr, prot); +} + +static unsigned long temp_pgtable_map_p4d(p4d_t *p4dp, unsigned long vaddr, pgprot_t prot) +{ + uintptr_t p4d_index = p4d_index(vaddr); + pud_t *pudp; + + if (p4d_val(p4dp[p4d_index]) == 0) { + pudp = (pud_t *) get_safe_page(GFP_ATOMIC); + if (!pudp) + return -ENOMEM; + + memset(pudp, 0, PAGE_SIZE); + p4dp[p4d_index] = pfn_p4d(PFN_DOWN(__pa(pudp)), PAGE_TABLE); + } else { + pudp = (pud_t *) __va(PFN_PHYS(_p4d_pfn(p4dp[p4d_index]))); + } + + return temp_pgtable_map_pud(pudp, vaddr, prot); +} + +static unsigned long temp_pgtable_map_pgd(pgd_t *pgdp, unsigned long vaddr, pgprot_t prot) +{ + uintptr_t pgd_idx = pgd_index(vaddr); + void *nextp; + + if (pgd_val(pgdp[pgd_idx]) == 0) { + nextp = (void *) get_safe_page(GFP_ATOMIC); + if (!nextp) + return -ENOMEM; + + memset(nextp, 0, PAGE_SIZE); + pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(__pa(nextp)), PAGE_TABLE); + } else { + nextp = (void *) __va(PFN_PHYS(_pgd_pfn(pgdp[pgd_idx]))); + } + + return temp_pgtable_map_pgd_next(nextp, vaddr, prot); +} + +static unsigned long temp_pgtable_mapping(pgd_t *pgdp, unsigned long vaddr, pgprot_t prot) +{ + return temp_pgtable_map_pgd(pgdp, vaddr, prot); +} + +static unsigned long relocate_restore_code(void) +{ + unsigned long ret; + void *page = (void *)get_safe_page(GFP_ATOMIC); + + if (!page) + return -ENOMEM; + + copy_page(page, core_restore_code); + + /* Make the page containing the relocated code executable */ + set_memory_x((unsigned long)page, 1); + + ret = temp_pgtable_mapping(resume_pg_dir, (unsigned long)page, PAGE_KERNEL_READ_EXEC); + if (ret) + return ret; + + return (unsigned long)page; +} + +int swsusp_arch_resume(void) +{ + unsigned long addr = PAGE_OFFSET; + unsigned long ret; + + /* + * Memory allocated by get_safe_page() will be dealt with by the hibernation core, + * we don't need to free it here. + */ + resume_pg_dir = (pgd_t *)get_safe_page(GFP_ATOMIC); + if (!resume_pg_dir) + return -ENOMEM; + + /* + * The pages need to be wrote-able when restoring the image. + * Create a second copy of page table just for the linear map, and use this when + * restoring. 
+ */ + for (; addr <= (unsigned long)pfn_to_virt(max_low_pfn); addr += PAGE_SIZE) { + ret = temp_pgtable_mapping(resume_pg_dir, addr, PAGE_KERNEL); + if (ret) + return (int) ret; + } + + /* Move the restore code to a new page so that it doesn't get overwritten by itself */ + relocated_restore_code = relocate_restore_code(); + if (relocated_restore_code == -ENOMEM) + return -ENOMEM; + + /* Map the __hibernate_cpu_resume() address to the temporary page table so that the + * restore code can jump to it after finished restore the image. The next execution + * code doesn't find itself in a different address space after switching over to the + * original page table used by the hibernated image. + */ + ret = temp_pgtable_mapping(resume_pg_dir, (unsigned long)resume_hdr.restore_cpu_addr, + PAGE_KERNEL_READ_EXEC); + if (ret) + return ret; + + restore_image(resume_hdr.saved_satp, (PFN_DOWN(__pa(resume_pg_dir)) | satp_mode), + resume_hdr.restore_cpu_addr); + + return 0; +} + +#ifdef CONFIG_SMP +int hibernate_resume_nonboot_cpu_disable(void) +{ + if (sleep_cpu < 0) { + pr_err("Failing to resume from hibernate on an unknown CPU.\n"); + return -ENODEV; + } + + return freeze_secondary_cpus(sleep_cpu); +} +#endif + +static int __init riscv_hibernate__init(void) +{ + hibernate_cpu_context = kcalloc(1, sizeof(struct suspend_context), GFP_KERNEL); + + if (WARN_ON(!hibernate_cpu_context)) + return -ENOMEM; + + return 0; +} + +early_initcall(riscv_hibernate__init);
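As a reader's aid for the assembly in hibernate-asm.S above: core_restore_code()
is, in effect, the following C loop, executed from the relocated 'safe' page
under the temporary page table. This is a sketch only; the function name is
made up, struct pbe and restore_pblist come from the generic hibernation core,
and copy_page() stands in for the hand-written 8-bytes-at-a-time copy:

	#include <linux/suspend.h>	/* struct pbe, restore_pblist */
	#include <asm/page.h>		/* copy_page(), PAGE_SIZE */

	static void example_core_restore(struct pbe *pblist)
	{
		struct pbe *pbe;

		/* Copy each saved page (one PAGE_SIZE block) back to its
		 * original location: destination is pbe->orig_address,
		 * source is pbe->address, as in the assembly loop.
		 */
		for (pbe = pblist; pbe; pbe = pbe->next)
			copy_page(pbe->orig_address, pbe->address);

		/* The real code then jumps to __hibernate_cpu_resume(). */
	}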