[-fixes,2/2] riscv: Fix set_huge_pte_at() for NAPOT mappings when a swap entry is set
| Message ID | 20230928151846.8229-3-alexghiti@rivosinc.com |
|---|---|
| State | New |
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, Andrew Jones <ajones@ventanamicro.com>, Qinglin Pan <panqinglin2020@iscas.ac.cn>, Ryan Roberts <ryan.roberts@arm.com>, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH -fixes 2/2] riscv: Fix set_huge_pte_at() for NAPOT mappings when a swap entry is set
Date: Thu, 28 Sep 2023 17:18:46 +0200
Message-Id: <20230928151846.8229-3-alexghiti@rivosinc.com>
In-Reply-To: <20230928151846.8229-1-alexghiti@rivosinc.com>
References: <20230928151846.8229-1-alexghiti@rivosinc.com>
Series: Fix set_huge_pte_at()
Commit Message
Alexandre Ghiti
Sept. 28, 2023, 3:18 p.m. UTC
We used to determine the number of page table entries to set for a NAPOT
hugepage from the pte value, which fails when the pte to set is a swap
entry.

So take advantage of a recent fix for arm64, reported in [1], which
introduces the size of the mapping as an argument to set_huge_pte_at(): we
can then use this size to compute the number of page table entries to set
for a NAPOT region.
Fixes: 82a1a1f3bfb6 ("riscv: mm: support Svnapot in hugetlb page")
Reported-by: Ryan Roberts <ryan.roberts@arm.com>
Closes: https://lore.kernel.org/linux-arm-kernel/20230922115804.2043771-1-ryan.roberts@arm.com/ [1]
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/mm/hugetlbpage.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
Comments
On Thu, Sep 28, 2023 at 05:18:46PM +0200, Alexandre Ghiti wrote:
> We used to determine the number of page table entries to set for a NAPOT
> hugepage by using the pte value which actually fails when the pte to set is
> a swap entry.
>
> So take advantage of a recent fix for arm64 reported in [1] which
> introduces the size of the mapping as an argument of set_huge_pte_at(): we
> can then use this size to compute the number of page table entries to set
> for a NAPOT region.
>
> Fixes: 82a1a1f3bfb6 ("riscv: mm: support Svnapot in hugetlb page")
> Reported-by: Ryan Roberts <ryan.roberts@arm.com>
> Closes: https://lore.kernel.org/linux-arm-kernel/20230922115804.2043771-1-ryan.roberts@arm.com/ [1]
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Breaks the build. Your $subject marks this for -fixes, but this will not
build there, as it relies on content that's not yet in that branch.
AFAICT, you're going to have to resend this with akpm on CC, as the
dependency is in his tree...

Thanks,
Conor.
Hi Conor,

On 30/09/2023 11:14, Conor Dooley wrote:
> On Thu, Sep 28, 2023 at 05:18:46PM +0200, Alexandre Ghiti wrote:
>> [commit message snipped]
>
> Breaks the build. Your $subject marks this for -fixes, but this will not
> build there, as it relies on content that's not yet in that branch.
> AFAICT, you're going to have to resend this with akpm on CC, as the
> dependency is in his tree...

I see, but I still don't understand why -fixes does not point to the latest
rcX instead of staying on rc1? The patch which this series depends on just
made it to rc4.

Thanks,
Alex
On Mon, Oct 02, 2023 at 09:18:52AM +0200, Alexandre Ghiti wrote:
> Hi Conor,
>
> On 30/09/2023 11:14, Conor Dooley wrote:
>> [earlier discussion snipped]
>
> I see, but I still don't understand why -fixes does not point to the
> latest rcX instead of staying on rc1? The patch which this series depends
> on just made it to rc4.

It's up to Palmer what he does with his fixes branch, but two thoughts.
Doing what you suggest would require rebasing things not yet sent to Linus
every week and fast-forwarding when PRs are actually merged. IIRC, Palmer
used to do something like the latter, but he got some complaints about that
and switched to the current method.

At the very least, you should point out dependencies like this, as I figure
an individual patch could be applied on top of -rc4 and merged in. Both
Palmer and I have submitted things for b4 to improve support for doing
things exactly like this ;)

However, if you do not mention what the deps for your patches are
explicitly, how are people supposed to know? The reference to the
dependency makes it look like a report for a similar problem that also
applies to riscv, not a pre-requisite for the patch.

Thanks,
Conor.
On Thu, Sep 28, 2023 at 05:18:46PM +0200, Alexandre Ghiti wrote:
> [commit message snipped]
>
> -	pte_num = napot_pte_num(napot_cont_order(pte));
> -	for (i = 0; i < pte_num; i++, ptep++, addr += PAGE_SIZE)
> +	pte_num = sz >> hugepage_shift;
> +	for (i = 0; i < pte_num; i++, ptep++, addr += (1 << hugepage_shift))
> 		set_pte_at(mm, addr, ptep, pte);

So a 64k napot, for example, will fall into the PAGE_SHIFT arm, but then
we'll calculate 16 for pte_num. Looks good to me.

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

Thanks,
drew
Hey Conor,

On 02/10/2023 15:11, Conor Dooley wrote:
> [earlier discussion snipped]
>
> However, if you do not mention what the deps for your patches are
> explicitly, how are people supposed to know? The reference to the
> dependency makes it look like a report for a similar problem that also
> applies to riscv, not a pre-requisite for the patch.

You're right, I saw the dependency being merged so I thought it would be
ok, but I should have mentioned it. I have just discussed with Palmer, and
I'll +cc Andrew to see if he can take that in his tree.

Thanks!
Alex
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index e4a2ace92dbe..b52f0210481f 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -183,15 +183,22 @@ void set_huge_pte_at(struct mm_struct *mm,
 			pte_t pte,
 			unsigned long sz)
 {
+	unsigned long hugepage_shift;
 	int i, pte_num;
 
-	if (!pte_napot(pte)) {
-		set_pte_at(mm, addr, ptep, pte);
-		return;
-	}
+	if (sz >= PGDIR_SIZE)
+		hugepage_shift = PGDIR_SHIFT;
+	else if (sz >= P4D_SIZE)
+		hugepage_shift = P4D_SHIFT;
+	else if (sz >= PUD_SIZE)
+		hugepage_shift = PUD_SHIFT;
+	else if (sz >= PMD_SIZE)
+		hugepage_shift = PMD_SHIFT;
+	else
+		hugepage_shift = PAGE_SHIFT;
 
-	pte_num = napot_pte_num(napot_cont_order(pte));
-	for (i = 0; i < pte_num; i++, ptep++, addr += PAGE_SIZE)
+	pte_num = sz >> hugepage_shift;
+	for (i = 0; i < pte_num; i++, ptep++, addr += (1 << hugepage_shift))
 		set_pte_at(mm, addr, ptep, pte);
 }