Message ID | 20230127091051.1465278-4-jeeheng.sia@starfivetech.com |
---|---|
State | New |
Headers |
From: Sia Jee Heng <jeeheng.sia@starfivetech.com>
To: <paul.walmsley@sifive.com>, <palmer@dabbelt.com>, <aou@eecs.berkeley.edu>
CC: <linux-riscv@lists.infradead.org>, <linux-kernel@vger.kernel.org>, <jeeheng.sia@starfivetech.com>, <leyfoon.tan@starfivetech.com>, <mason.huo@starfivetech.com>
Subject: [PATCH v3 3/4] RISC-V: mm: Enable huge page support to kernel_page_present() function
Date: Fri, 27 Jan 2023 17:10:50 +0800
Message-ID: <20230127091051.1465278-4-jeeheng.sia@starfivetech.com>
In-Reply-To: <20230127091051.1465278-1-jeeheng.sia@starfivetech.com>
References: <20230127091051.1465278-1-jeeheng.sia@starfivetech.com>
Series | RISC-V Hibernation Support |
Commit Message
JeeHeng Sia
Jan. 27, 2023, 9:10 a.m. UTC
Currently the kernel_page_present() function does not support huge page detection, which causes it to mistakenly return false to the hibernation core. Add huge page detection to the function to solve the problem.

Signed-off-by: Sia Jee Heng <jeeheng.sia@starfivetech.com>
Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com>
Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
---
 arch/riscv/mm/pageattr.c | 6 ++++++
 1 file changed, 6 insertions(+)
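For context, the hibernation core consults kernel_page_present() while copying pages into the snapshot image. A rough sketch of that caller, paraphrased from kernel/power/snapshot.c (recalled from memory, so treat the exact body as illustrative rather than authoritative):

/* Paraphrased from kernel/power/snapshot.c; illustrative, not verbatim. */
static void safe_copy_page(void *dst, struct page *s_page)
{
	if (kernel_page_present(s_page)) {
		/* Page is mapped in the kernel: copy it directly. */
		do_copy_page(dst, page_address(s_page));
	} else {
		/*
		 * A false negative from kernel_page_present() lands here:
		 * the page is temporarily remapped before copying even
		 * though it was mapped (via a huge mapping) all along.
		 */
		hibernate_map_page(s_page);
		do_copy_page(dst, page_address(s_page));
		hibernate_unmap_page(s_page);
	}
}

Because the RISC-V direct map can be built with huge leaf entries (PMD-sized or larger), a walk without leaf checks reports every page behind such a mapping as not present and sends it down the fallback path.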
Comments
+CC Alex

On Fri, Jan 27, 2023 at 05:10:50PM +0800, Sia Jee Heng wrote:
> Currently kernel_page_present() function doesn't support huge page
> detection causes the function to mistakenly return false to the
> hibernation core.

This sounds like a bug & should have a fixes tag, no? I assume for
whatever commit enabled huge page support...
We don't support set_memory, which by the looks of things is the other
usecase for this function, so probably doesn't need backporting.

Alex, does this change look good to you?

> Add huge page detection to the function to solve the problem.
>
> Signed-off-by: Sia Jee Heng <jeeheng.sia@starfivetech.com>
> Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com>
> Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
> ---
>  arch/riscv/mm/pageattr.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index 86c56616e5de..792b8d10cdfc 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -221,14 +221,20 @@ bool kernel_page_present(struct page *page)
>  	p4d = p4d_offset(pgd, addr);
>  	if (!p4d_present(*p4d))
>  		return false;
> +	if (p4d_leaf(*p4d))
> +		return true;
>
>  	pud = pud_offset(p4d, addr);
>  	if (!pud_present(*pud))
>  		return false;
> +	if (pud_leaf(*pud))
> +		return true;
>
>  	pmd = pmd_offset(pud, addr);
>  	if (!pmd_present(*pmd))
>  		return false;
> +	if (pmd_leaf(*pmd))
> +		return true;
>
>  	pte = pte_offset_kernel(pmd, addr);
>  	return pte_present(*pte);
> --
> 2.34.1
>
Hi,

On Mon, Jan 30, 2023 at 10:57 PM Conor Dooley <conor@kernel.org> wrote:
>
> +CC Alex
>
> On Fri, Jan 27, 2023 at 05:10:50PM +0800, Sia Jee Heng wrote:
> > Currently kernel_page_present() function doesn't support huge page
> > detection causes the function to mistakenly return false to the
> > hibernation core.
>
> This sounds like a bug & should have a fixes tag, no? I assume for
> whatever commit enabled huge page support...
> We don't support set_memory, which by the looks of things is the other
> usecase for this function, so probably doesn't need backporting.

Maybe add this patch in the Fixes tag: commit 9e953cda5cdf ("riscv:
Introduce huge page support for 32/64bit kernel").

>
> Alex, does this change look good to you?

Yes, just one thing though: what about a pgd_leaf() test? Even if very
unlikely (I see x86 does not even test it), the privileged spec states
it is possible to have a 256TB page.

Thanks,

Alex

> > Add huge page detection to the function to solve the problem.
> >
> > Signed-off-by: Sia Jee Heng <jeeheng.sia@starfivetech.com>
> > Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com>
> > Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
> > ---
> >  arch/riscv/mm/pageattr.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > index 86c56616e5de..792b8d10cdfc 100644
> > --- a/arch/riscv/mm/pageattr.c
> > +++ b/arch/riscv/mm/pageattr.c
> > @@ -221,14 +221,20 @@ bool kernel_page_present(struct page *page)
> >  	p4d = p4d_offset(pgd, addr);
> >  	if (!p4d_present(*p4d))
> >  		return false;
> > +	if (p4d_leaf(*p4d))
> > +		return true;
> >
> >  	pud = pud_offset(p4d, addr);
> >  	if (!pud_present(*pud))
> >  		return false;
> > +	if (pud_leaf(*pud))
> > +		return true;
> >
> >  	pmd = pmd_offset(pud, addr);
> >  	if (!pmd_present(*pmd))
> >  		return false;
> > +	if (pmd_leaf(*pmd))
> > +		return true;
> >
> >  	pte = pte_offset_kernel(pmd, addr);
> >  	return pte_present(*pte);
> > --
> > 2.34.1
> >
> -----Original Message-----
> From: Alexandre Ghiti <alexghiti@rivosinc.com>
> Sent: Tuesday, 31 January, 2023 4:19 PM
> To: Conor Dooley <conor@kernel.org>
> Cc: JeeHeng Sia <jeeheng.sia@starfivetech.com>; paul.walmsley@sifive.com; palmer@dabbelt.com; aou@eecs.berkeley.edu;
> linux-riscv@lists.infradead.org; linux-kernel@vger.kernel.org; Leyfoon Tan <leyfoon.tan@starfivetech.com>; Mason Huo
> <mason.huo@starfivetech.com>
> Subject: Re: [PATCH v3 3/4] RISC-V: mm: Enable huge page support to kernel_page_present() function
>
> Hi,
>
> On Mon, Jan 30, 2023 at 10:57 PM Conor Dooley <conor@kernel.org> wrote:
> >
> > +CC Alex
> >
> > On Fri, Jan 27, 2023 at 05:10:50PM +0800, Sia Jee Heng wrote:
> > > Currently kernel_page_present() function doesn't support huge page
> > > detection causes the function to mistakenly return false to the
> > > hibernation core.
> >
> > This sounds like a bug & should have a fixes tag, no? I assume for
> > whatever commit enabled huge page support...
> > We don't support set_memory, which by the looks of things is the other
> > usecase for this function, so probably doesn't need backporting.
>
> Maybe add this patch in the Fixes tag: commit 9e953cda5cdf ("riscv:
> Introduce huge page support for 32/64bit kernel").

Sure, will add the fixes tag.

> >
> > Alex, does this change look good to you?
>
> Yes, just one thing though: what about a pgd_leaf() test? Even if very
> unlikely (I see x86 does not even test it), the privileged spec states
> it is possible to have a 256TB page.

I can add it in, but as you are probably aware, x86 and ARM don't even test it. Thanks.

>
> Thanks,
>
> Alex
>
> >
> > > Add huge page detection to the function to solve the problem.
> > >
> > > Signed-off-by: Sia Jee Heng <jeeheng.sia@starfivetech.com>
> > > Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com>
> > > Reviewed-by: Mason Huo <mason.huo@starfivetech.com>
> > > ---
> > >  arch/riscv/mm/pageattr.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > >
> > > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > > index 86c56616e5de..792b8d10cdfc 100644
> > > --- a/arch/riscv/mm/pageattr.c
> > > +++ b/arch/riscv/mm/pageattr.c
> > > @@ -221,14 +221,20 @@ bool kernel_page_present(struct page *page)
> > >  	p4d = p4d_offset(pgd, addr);
> > >  	if (!p4d_present(*p4d))
> > >  		return false;
> > > +	if (p4d_leaf(*p4d))
> > > +		return true;
> > >
> > >  	pud = pud_offset(p4d, addr);
> > >  	if (!pud_present(*pud))
> > >  		return false;
> > > +	if (pud_leaf(*pud))
> > > +		return true;
> > >
> > >  	pmd = pmd_offset(pud, addr);
> > >  	if (!pmd_present(*pmd))
> > >  		return false;
> > > +	if (pmd_leaf(*pmd))
> > > +		return true;
> > >
> > >  	pte = pte_offset_kernel(pmd, addr);
> > >  	return pte_present(*pte);
> > > --
> > > 2.34.1
> > >
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 86c56616e5de..792b8d10cdfc 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -221,14 +221,20 @@ bool kernel_page_present(struct page *page)
 	p4d = p4d_offset(pgd, addr);
 	if (!p4d_present(*p4d))
 		return false;
+	if (p4d_leaf(*p4d))
+		return true;
 
 	pud = pud_offset(p4d, addr);
 	if (!pud_present(*pud))
 		return false;
+	if (pud_leaf(*pud))
+		return true;
 
 	pmd = pmd_offset(pud, addr);
 	if (!pmd_present(*pmd))
 		return false;
+	if (pmd_leaf(*pmd))
+		return true;
 
 	pte = pte_offset_kernel(pmd, addr);
 	return pte_present(*pte);
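For reference, the complete walk after this change, including the pgd_leaf() check agreed in the thread above, would look roughly as follows. The function prologue (page_address(), pgd_offset_k() and the !pgd_present() bail-out) is reconstructed from the existing arch/riscv/mm/pageattr.c code rather than taken from the posted diff, so treat this as a sketch, not the exact v4 patch:

/* Sketch of the resulting walk in arch/riscv/mm/pageattr.c (kernel context). */
bool kernel_page_present(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	pgd = pgd_offset_k(addr);
	if (!pgd_present(*pgd))
		return false;
	if (pgd_leaf(*pgd))		/* extra check suggested in review */
		return true;

	p4d = p4d_offset(pgd, addr);
	if (!p4d_present(*p4d))
		return false;
	if (p4d_leaf(*p4d))
		return true;

	pud = pud_offset(p4d, addr);
	if (!pud_present(*pud))
		return false;
	if (pud_leaf(*pud))
		return true;

	pmd = pmd_offset(pud, addr);
	if (!pmd_present(*pmd))
		return false;
	if (pmd_leaf(*pmd))
		return true;

	pte = pte_offset_kernel(pmd, addr);
	return pte_present(*pte);
}

The pattern is the same at every level: bail out if the entry is not present, and stop early with "present" as soon as a leaf (huge) mapping is found instead of descending into a table that does not exist.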