From patchwork Thu Dec 21 20:04:43 2023
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 182434
From: andrey.konovalov@linux.dev
To: Marco Elver
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
    kasan-dev@googlegroups.com, Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH mm 01/11] kasan/arm64: improve comments for KASAN_SHADOW_START/END
Date: Thu, 21 Dec 2023 21:04:43 +0100
Message-Id: <140108ca0b164648c395a41fbeecb0601b1ae9e1.1703188911.git.andreyknvl@google.com>
In-Reply-To:
References:

From: Andrey Konovalov

Unify and improve the comments for the KASAN_SHADOW_START/END definitions
from include/asm/kasan.h and include/asm/memory.h. Also put both definitions
together in include/asm/memory.h.

Also clarify the related BUILD_BUG_ON checks in mm/kasan_init.c.
Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/kasan.h  | 22 +------------------
 arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++++++++++------
 arch/arm64/mm/kasan_init.c      |  5 +++++
 3 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 12d5f47f7dbe..7eefc525a9df 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -15,29 +15,9 @@
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
+asmlinkage void kasan_early_init(void);
 void kasan_init(void);
-
-/*
- * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
- * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
- * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- *
- * KASAN_SHADOW_OFFSET:
- *  This value is used to map an address to the corresponding shadow
- *  address by the following formula:
- *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
- *
- * (1 << (64 - KASAN_SHADOW_SCALE_SHIFT)) shadow addresses that lie in range
- * [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END) cover all 64-bits of virtual
- * addresses. So KASAN_SHADOW_OFFSET should satisfy the following equation:
- *     KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
- *                           (1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
- */
-#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
-
 void kasan_copy_shadow(pgd_t *pgdir);
-asmlinkage void kasan_early_init(void);
 
 #else
 static inline void kasan_init(void) { }
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index fde4186cc387..0f139cb4467b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -65,15 +65,41 @@
 #define KERNEL_END	_end
 
 /*
- * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
- * address space for the shadow region respectively. They can bloat the stack
- * significantly, so double the (minimum) stack size when they are in use.
+ * Generic and Software Tag-Based KASAN modes require 1/8th and 1/16th of the
+ * kernel virtual address space for storing the shadow memory respectively.
+ *
+ * The mapping between a virtual memory address and its corresponding shadow
+ * memory address is defined based on the formula:
+ *
+ *   shadow_addr = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
+ *
+ * where KASAN_SHADOW_SCALE_SHIFT is the order of the number of bits that map
+ * to a single shadow byte and KASAN_SHADOW_OFFSET is a constant that offsets
+ * the mapping. Note that KASAN_SHADOW_OFFSET does not point to the start of
+ * the shadow memory region.
+ *
+ * Based on this mapping, we define two constants:
+ *
+ *   KASAN_SHADOW_START: the start of the shadow memory region;
+ *   KASAN_SHADOW_END: the end of the shadow memory region.
+ *
+ * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
+ * the upper bound of possible virtual kernel memory addresses UL(1) << 64
+ * according to the mapping formula.
+ *
+ * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
+ * memory start must map to the lowest possible kernel virtual memory address
+ * and thus it depends on the actual bitness of the address space.
+ *
+ * As KASAN inserts redzones between stack variables, this increases the stack
+ * memory usage significantly. Thus, we double the (minimum) stack size.
  */
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
-#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
-					+ KASAN_SHADOW_OFFSET)
-#define PAGE_END		(KASAN_SHADOW_END - (1UL << (vabits_actual - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
+#define PAGE_END		KASAN_SHADOW_START
 #define KASAN_THREAD_SHIFT	1
 #else
 #define KASAN_THREAD_SHIFT	0
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 555285ebd5af..4c7ad574b946 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -170,6 +170,11 @@ asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
 		     KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+	/*
+	 * We cannot check the actual value of KASAN_SHADOW_START during build,
+	 * as it depends on vabits_actual. As a best-effort approach, check
+	 * potential values calculated based on VA_BITS and VA_BITS_MIN.
+	 */
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
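
Not part of the patch, just an illustration for readers following along outside
a kernel tree: the user-space sketch below models the shadow mapping documented
in the new memory.h comment. The scale shift and offset values are example
placeholders (a real build takes the offset from CONFIG_KASAN_SHADOW_OFFSET and
the VA size from vabits_actual), and the helper name mem_to_shadow() is made up
for the sketch; it only shows how KASAN_SHADOW_END, KASAN_SHADOW_START, and a
per-address shadow address relate under the formula above.

/*
 * Illustration only: user-space model of the KASAN shadow mapping described
 * in the memory.h comment. Assumes an LP64 host; the offset below is an
 * example value, not necessarily what a given arm64 config selects.
 */
#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3                     /* Generic KASAN: 8 bytes of memory per shadow byte */
#define KASAN_SHADOW_OFFSET      0xdfff800000000000UL  /* example CONFIG_KASAN_SHADOW_OFFSET value */

/* Shadow of the upper bound of the 64-bit address space (1 << 64). */
#define KASAN_SHADOW_END \
	((1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)

/* Shadow of the lowest kernel address for a given number of VA bits. */
#define _KASAN_SHADOW_START(va) \
	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))

/* The mapping formula from the comment block. */
static uint64_t mem_to_shadow(uint64_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	unsigned int vabits = 48;                  /* stand-in for vabits_actual */
	uint64_t lowest_kva = ~0UL << vabits;      /* lowest kernel VA for 48 VA bits */

	printf("KASAN_SHADOW_END   = 0x%016llx\n",
	       (unsigned long long)KASAN_SHADOW_END);
	printf("KASAN_SHADOW_START = 0x%016llx\n",
	       (unsigned long long)_KASAN_SHADOW_START(vabits));
	/* The shadow of the lowest kernel VA lands exactly at KASAN_SHADOW_START. */
	printf("shadow(0x%016llx) = 0x%016llx\n",
	       (unsigned long long)lowest_kva,
	       (unsigned long long)mem_to_shadow(lowest_kva));
	return 0;
}

With these example values the program prints the shadow of the lowest 48-bit
kernel VA landing exactly at the computed KASAN_SHADOW_START, which is the
relationship the reordered definitions and unified comment aim to make obvious.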