From patchwork Tue Dec 19 17:50:43 2023
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 181181
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Will Deacon,
 "Aneesh Kumar K . V", Andrew Morton, Nick Piggin, Peter Zijlstra
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/4] riscv: tlb: fix __p*d_free_tlb()
Date: Wed, 20 Dec 2023 01:50:43 +0800
Message-Id: <20231219175046.2496-2-jszhang@kernel.org>
In-Reply-To: <20231219175046.2496-1-jszhang@kernel.org>
References: <20231219175046.2496-1-jszhang@kernel.org>

If a non-leaf page table entry, i.e. a pmd, pud or p4d, is modified, an
sfence.vma is required for safety: an implementation is allowed to cache
non-leaf translations in the TLB. I have not encountered such hardware so
far, but it is possible in theory.
Signed-off-by: Jisheng Zhang
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/include/asm/pgalloc.h | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index d169a4f41a2e..a12fb83fa1f5 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -95,7 +95,13 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 		__pud_free(mm, pud);
 }
 
-#define __pud_free_tlb(tlb, pud, addr)  pud_free((tlb)->mm, pud)
+#define __pud_free_tlb(tlb, pud, addr)					\
+do {									\
+	if (pgtable_l4_enabled) {					\
+		pagetable_pud_dtor(virt_to_ptdesc(pud));		\
+		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud));	\
+	}								\
+} while (0)
 
 #define p4d_alloc_one p4d_alloc_one
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -124,7 +130,11 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 		__p4d_free(mm, p4d);
 }
 
-#define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
+#define __p4d_free_tlb(tlb, p4d, addr)					\
+do {									\
+	if (pgtable_l5_enabled)						\
+		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(p4d));	\
+} while (0)
 #endif /* __PAGETABLE_PMD_FOLDED */
 
 static inline void sync_kernel_mappings(pgd_t *pgd)
@@ -149,7 +159,11 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #ifndef __PAGETABLE_PMD_FOLDED
 
-#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
+#define __pmd_free_tlb(tlb, pmd, addr)				\
+do {								\
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));		\
+	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
+} while (0)
 
 #endif /* __PAGETABLE_PMD_FOLDED */
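
For context, a minimal sketch (not part of the patch) of the ordering the
mmu_gather path provides. The example_pud_teardown() helper and the explicit
tlb_gather_mmu()/tlb_finish_mmu() calls below are illustrative assumptions;
in the kernel the patched macros are invoked from the generic page table
freeing paths (e.g. free_pgtables()), which own the struct mmu_gather:

#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm-generic/tlb.h>

/*
 * Illustrative only.  With the patched __pud_free_tlb(), the pud page is
 * queued on the mmu_gather instead of being freed right away, so it can
 * only be reused after tlb_finish_mmu() has flushed the TLB (sfence.vma
 * on riscv).  The old pud_free((tlb)->mm, pud) freed the page immediately,
 * with no ordering against any TLB flush.
 */
static void example_pud_teardown(struct mm_struct *mm, pud_t *pud)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);

	/* what the patched macro does when 4-level paging is enabled */
	if (pgtable_l4_enabled) {
		pagetable_pud_dtor(virt_to_ptdesc(pud));
		tlb_remove_page_ptdesc(&tlb, virt_to_ptdesc(pud));
	}

	/* flushes the TLB first, then frees the queued page table pages */
	tlb_finish_mmu(&tlb);
}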