From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: sneves@dei.uc.pt, ebiggers@kernel.org, "Jason A. Donenfeld" <Jason@zx2c4.com>
Donenfeld" Subject: [PATCH v2] random: use rejection sampling for uniform bounded random integers Date: Tue, 18 Oct 2022 11:34:21 -0600 Message-Id: <20221018173420.127174-1-Jason@zx2c4.com> In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-6.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1746900726647774191?= X-GMAIL-MSGID: =?utf-8?q?1747048796808861515?= Until the very recent commits, many bounded random integers were calculated using `get_random_u32() % max_plus_one`, which not only incurs the price of a division -- indicating performance mostly was not a real issue -- but also does not result in a uniformly distributed output if max_plus_one is not a power of two. Recent commits moved to using `prandom_u32_max(max_plus_one)`, which replaces the division with a faster multiplication, but still does not solve the issue with non-uniform output. For some users, maybe this isn't a problem, and for others, maybe it is, but for the majority of users, probably the question has never been posed and analyzed, and nobody thought much about it, probably assuming random is random is random. In other words, the unthinking expectation of most users is likely that the resultant numbers are uniform. So we implement here an efficient way of generating uniform bounded random integers. Through use of compile-time evaluation, and avoiding divisions as much as possible, this commit introduces no measurable overhead. At least for hot-path uses tested, any potential difference was lost in the noise. On both clang and gcc, code generation is pretty small. The new function, get_random_u32_below(), lives in random.h, rather than prandom.h, and has a "get_random_xxx" function name, because it is suitable for all uses, including cryptography. In order to be efficient, we implement a kernel-specific variant of Daniel Lemire's algorithm from "Fast Random Integer Generation in an Interval", linked below. The kernel's variant takes advantage of constant folding to avoid divisions entirely in the vast majority of cases, works on both 32-bit and 64-bit architectures, and requests a minimal amount of bytes from the RNG. Link: https://arxiv.org/pdf/1805.10941.pdf Signed-off-by: Jason A. Donenfeld --- Changes v1->v2: - Add a few explanatory comments. drivers/char/random.c | 22 ++++++++++++++++++++++ include/linux/prandom.h | 18 ++---------------- include/linux/random.h | 40 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 64 insertions(+), 16 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 2fe28eeb2f38..e3cf4f51ed58 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -160,6 +160,7 @@ EXPORT_SYMBOL(wait_for_random_bytes); * u8 get_random_u8() * u16 get_random_u16() * u32 get_random_u32() + * u32 get_random_u32_below(u32 ceil) * u64 get_random_u64() * unsigned long get_random_long() * @@ -510,6 +511,27 @@ DEFINE_BATCHED_ENTROPY(u16) DEFINE_BATCHED_ENTROPY(u32) DEFINE_BATCHED_ENTROPY(u64) +u32 __get_random_u32_below(u32 ceil) +{ + /* + * This is the slow path for variable ceil. 
 drivers/char/random.c   | 22 ++++++++++++++++++++++
 include/linux/prandom.h | 18 ++----------------
 include/linux/random.h  | 40 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 64 insertions(+), 16 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2fe28eeb2f38..e3cf4f51ed58 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -160,6 +160,7 @@ EXPORT_SYMBOL(wait_for_random_bytes);
  *	u8 get_random_u8()
  *	u16 get_random_u16()
  *	u32 get_random_u32()
+ *	u32 get_random_u32_below(u32 ceil)
  *	u64 get_random_u64()
  *	unsigned long get_random_long()
  *
@@ -510,6 +511,27 @@ DEFINE_BATCHED_ENTROPY(u16)
 DEFINE_BATCHED_ENTROPY(u32)
 DEFINE_BATCHED_ENTROPY(u64)
 
+u32 __get_random_u32_below(u32 ceil)
+{
+	/*
+	 * This is the slow path for variable ceil. It is still fast, most of
+	 * the time, by doing traditional reciprocal multiplication and
+	 * opportunistically comparing the lower half to ceil itself, before
+	 * falling back to computing a larger bound, and then rejecting samples
+	 * whose lower half would indicate a range indivisible by ceil. The use
+	 * of `-ceil % ceil` is analogous to `2^32 % ceil`, but is computable
+	 * in 32-bits.
+	 */
+	u64 mult = (u64)ceil * get_random_u32();
+	if (unlikely((u32)mult < ceil)) {
+		u32 bound = -ceil % ceil;
+		while (unlikely((u32)mult < bound))
+			mult = (u64)ceil * get_random_u32();
+	}
+	return mult >> 32;
+}
+EXPORT_SYMBOL(__get_random_u32_below);
+
 #ifdef CONFIG_SMP
 /*
  * This function is called when the CPU is coming up, with entry
diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index e0a0759dd09c..1f4a0de7b019 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -23,24 +23,10 @@ void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
 #define prandom_init_once(pcpu_state) \
 	DO_ONCE(prandom_seed_full_state, (pcpu_state))
 
-/**
- * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
- * @ep_ro: right open interval endpoint
- *
- * Returns a pseudo-random number that is in interval [0, ep_ro). This is
- * useful when requesting a random index of an array containing ep_ro elements,
- * for example. The result is somewhat biased when ep_ro is not a power of 2,
- * so do not use this for cryptographic purposes.
- *
- * Returns: pseudo-random number in interval [0, ep_ro)
- */
+/* Deprecated: use get_random_u32_below() instead. */
 static inline u32 prandom_u32_max(u32 ep_ro)
 {
-	if (__builtin_constant_p(ep_ro <= 1U << 8) && ep_ro <= 1U << 8)
-		return (get_random_u8() * ep_ro) >> 8;
-	if (__builtin_constant_p(ep_ro <= 1U << 16) && ep_ro <= 1U << 16)
-		return (get_random_u16() * ep_ro) >> 16;
-	return ((u64)get_random_u32() * ep_ro) >> 32;
+	return get_random_u32_below(ep_ro);
 }
 
 /*
diff --git a/include/linux/random.h b/include/linux/random.h
index 147a5e0d0b8e..3a82c0a8bc46 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -51,6 +51,46 @@ static inline unsigned long get_random_long(void)
 #endif
 }
 
+u32 __get_random_u32_below(u32 ceil);
+
+/*
+ * Returns a random integer in the interval [0, ceil), with uniform
+ * distribution, suitable for all uses. Fastest when ceil is a constant, but
+ * still fast for variable ceil as well.
+ */
+static inline u32 get_random_u32_below(u32 ceil)
+{
+	if (!__builtin_constant_p(ceil))
+		return __get_random_u32_below(ceil);
+
+	/*
+	 * For the fast path, below, all operations on ceil are precomputed by
+	 * the compiler, so this incurs no overhead for checking pow2, doing
+	 * divisions, or branching based on integer size. The resultant
+	 * algorithm does traditional reciprocal multiplication (typically
+	 * optimized by the compiler into shifts and adds), rejecting samples
+	 * whose lower half would indicate a range indivisible by ceil.
+	 */
+	BUILD_BUG_ON_MSG(!ceil, "get_random_u32_below() must take ceil > 0");
+	if (ceil <= 1)
+		return 0;
+	for (;;) {
+		if (ceil <= 1U << 8) {
+			u32 mult = ceil * get_random_u8();
+			if (likely(is_power_of_2(ceil) || (u8)mult >= (1U << 8) % ceil))
+				return mult >> 8;
+		} else if (ceil <= 1U << 16) {
+			u32 mult = ceil * get_random_u16();
+			if (likely(is_power_of_2(ceil) || (u16)mult >= (1U << 16) % ceil))
+				return mult >> 16;
+		} else {
+			u64 mult = (u64)ceil * get_random_u32();
+			if (likely(is_power_of_2(ceil) || (u32)mult >= -ceil % ceil))
+				return mult >> 32;
+		}
+	}
+}
+
 /*
  * On 64-bit architectures, protect against non-terminated C string overflows
  * by zeroing out the first byte of the canary; this leaves 56 bits of entropy.
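
[Editor's note, not part of the patch: a hypothetical caller, to show
what the constant-folded fast path costs. The fault-injection scenario,
the INJECT_DENOM name, and should_inject_fault() are invented for
illustration; only get_random_u32_below() itself comes from the patch.]

#include <linux/random.h>
#include <linux/types.h>

#define INJECT_DENOM 100	/* hypothetical 1-in-100 rate; not a power of 2 */

static bool should_inject_fault(void)
{
	/* Before: prandom_u32_max(INJECT_DENOM) == 0, which is slightly biased. */
	return get_random_u32_below(INJECT_DENOM) == 0;
}

/*
 * Because INJECT_DENOM is a compile-time constant <= 1U << 8, the inline
 * fast path constant-folds to, conceptually:
 *
 *	for (;;) {
 *		u32 mult = 100 * get_random_u8();
 *		if (likely((u8)mult >= 56))	// 56 == (1U << 8) % 100
 *			return mult >> 8;
 *	}
 *
 * i.e. one byte from the RNG, a multiply, and a compare in the common
 * case, with no runtime division. For a power-of-2 constant, the
 * is_power_of_2() test makes the rejection branch vanish entirely.
 */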