From patchwork Mon Oct 31 10:28:40 2022
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 13191
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: "Jason A. Donenfeld", Will Deacon, Ard Biesheuvel, Jean-Philippe Brucker
Subject: [PATCH v3] random: remove early archrandom abstraction
Date: Mon, 31 Oct 2022 11:28:40 +0100
Message-Id: <20221031102840.85621-1-Jason@zx2c4.com>
In-Reply-To: <20221030212123.1022049-1-Jason@zx2c4.com>
References: <20221030212123.1022049-1-Jason@zx2c4.com>

The arch_get_random*_early() abstraction is not completely useful and
adds complexity, because it's not a given that there will be no calls
to arch_get_random*() between random_init_early(), which uses
arch_get_random*_early(), and init_cpu_features(). During that gap,
crng_reseed() might be called, which uses arch_get_random*(), since
it's mostly not init code.

Instead we can test whether we're in the early phase in
arch_get_random*() itself, and in doing so avoid all ambiguity about
where we are. Fortunately, the only architecture that currently
implements arch_get_random*_early() also has an alternatives-based cpu
feature system, one flag of which determines whether the other flags
have been initialized. This makes it possible to do the early check
with zero cost once the system is initialized.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Jean-Philippe Brucker
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Changes v2->v3:
- Keep around __early_cpu_has_rndr() for kaslr usage.

Changes v1->v2:
- Also check early_boot_irqs_disabled, to make sure that the raw
  capability check only runs during an early stage when we're only
  running on the boot CPU and with IRQs off. This check disappears
  once the system is up, because system_capabilities_finalized() is
  a static branch.

Catalin - Though this touches arm64's archrandom.h, I intend to take
this through the random.git tree, if that's okay. I have other patches
that will build off of this one. -Jason
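For reviewers, a rough caller-side sketch of what this buys us: the same
code now works both before and after cpufeature finalization, because the
early-phase check lives inside the arm64 arch_get_random*() helpers. This
is illustrative only and not part of the patch; seed_pool() is a made-up
example, while arch_get_random_seed_longs()/arch_get_random_longs() are
the real interfaces touched below.

static size_t seed_pool(unsigned long *buf, size_t nlongs)
{
	size_t i = 0, got;

	while (i < nlongs) {
		/* Prefer seed-grade output (RNDRRS on arm64), else plain RNDR. */
		got = arch_get_random_seed_longs(buf + i, nlongs - i);
		if (!got)
			got = arch_get_random_longs(buf + i, nlongs - i);
		if (!got)
			break;	/* no arch RNG; caller mixes whatever it already has */
		i += got;
	}
	return i;
}

This is essentially the loop random_init_early() ends up with in the
drivers/char/random.c hunk below, just without the _early() suffixes.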
 arch/arm64/include/asm/archrandom.h | 61 ++++++++---------------------
 drivers/char/random.c               |  4 +-
 include/linux/random.h              | 20 ----------
 3 files changed, 18 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
index 109e2a4454be..4b0f28730ab2 100644
--- a/arch/arm64/include/asm/archrandom.h
+++ b/arch/arm64/include/asm/archrandom.h
@@ -58,6 +58,20 @@ static inline bool __arm64_rndrrs(unsigned long *v)
 	return ok;
 }
 
+static inline bool __early_cpu_has_rndr(void)
+{
+	/* Open code as we run prior to the first call to cpufeature. */
+	unsigned long ftr = read_sysreg_s(SYS_ID_AA64ISAR0_EL1);
+	return (ftr >> ID_AA64ISAR0_EL1_RNDR_SHIFT) & 0xf;
+}
+
+static __always_inline bool __cpu_has_rng(void)
+{
+	if (!system_capabilities_finalized() && early_boot_irqs_disabled)
+		return __early_cpu_has_rndr();
+	return cpus_have_const_cap(ARM64_HAS_RNG);
+}
+
 static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs)
 {
 	/*
@@ -66,7 +80,7 @@ static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t
 	 * cpufeature code and with potential scheduling between CPUs
 	 * with and without the feature.
 	 */
-	if (max_longs && cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v))
+	if (max_longs && __cpu_has_rng() && __arm64_rndr(v))
 		return 1;
 	return 0;
 }
@@ -108,53 +122,10 @@ static inline size_t __must_check arch_get_random_seed_longs(unsigned long *v, s
 	 * reseeded after each invocation. This is not a 100% fit but good
 	 * enough to implement this API if no other entropy source exists.
 	 */
-	if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndrrs(v))
-		return 1;
-
-	return 0;
-}
-
-static inline bool __init __early_cpu_has_rndr(void)
-{
-	/* Open code as we run prior to the first call to cpufeature. */
-	unsigned long ftr = read_sysreg_s(SYS_ID_AA64ISAR0_EL1);
-	return (ftr >> ID_AA64ISAR0_EL1_RNDR_SHIFT) & 0xf;
-}
-
-static inline size_t __init __must_check
-arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-
-	if (!max_longs)
-		return 0;
-
-	if (smccc_trng_available) {
-		struct arm_smccc_res res;
-
-		max_longs = min_t(size_t, 3, max_longs);
-		arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, max_longs * 64, &res);
-		if ((int)res.a0 >= 0) {
-			switch (max_longs) {
-			case 3:
-				*v++ = res.a1;
-				fallthrough;
-			case 2:
-				*v++ = res.a2;
-				fallthrough;
-			case 1:
-				*v++ = res.a3;
-				break;
-			}
-			return max_longs;
-		}
-	}
-
-	if (__early_cpu_has_rndr() && __arm64_rndr(v))
+	if (__cpu_has_rng() && __arm64_rndrrs(v))
 		return 1;
 
 	return 0;
 }
-#define arch_get_random_seed_longs_early arch_get_random_seed_longs_early
 
 #endif /* _ASM_ARCHRANDOM_H */
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 6f323344d0b9..e3cf4f51ed58 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -813,13 +813,13 @@ void __init random_init_early(const char *command_line)
 #endif
 
 	for (i = 0, arch_bits = sizeof(entropy) * 8; i < ARRAY_SIZE(entropy);) {
-		longs = arch_get_random_seed_longs_early(entropy, ARRAY_SIZE(entropy) - i);
+		longs = arch_get_random_seed_longs(entropy, ARRAY_SIZE(entropy) - i);
 		if (longs) {
 			_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
 			i += longs;
 			continue;
 		}
-		longs = arch_get_random_longs_early(entropy, ARRAY_SIZE(entropy) - i);
+		longs = arch_get_random_longs(entropy, ARRAY_SIZE(entropy) - i);
 		if (longs) {
 			_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
 			i += longs;
diff --git a/include/linux/random.h b/include/linux/random.h
index 182780cafd45..2bdd3add3400 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -153,26 +153,6 @@ declare_get_random_var_wait(long, unsigned long)
 
 #include <asm/archrandom.h>
 
-/*
- * Called from the boot CPU during startup; not valid to call once
- * secondary CPUs are up and preemption is possible.
- */
-#ifndef arch_get_random_seed_longs_early
-static inline size_t __init arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-	return arch_get_random_seed_longs(v, max_longs);
-}
-#endif
-
-#ifndef arch_get_random_longs_early
-static inline bool __init arch_get_random_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-	return arch_get_random_longs(v, max_longs);
-}
-#endif
-
 #ifdef CONFIG_SMP
 int random_prepare_cpu(unsigned int cpu);
 int random_online_cpu(unsigned int cpu);