Message ID | 20230303160645.3594-1-mkoutny@suse.com |
---|---|
State | New |
Headers | From: Michal Koutný <mkoutny@suse.com>; To: linux-kernel@vger.kernel.org, x86@kernel.org; Cc: Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>; Subject: [PATCH] x86/mm: Do not shuffle CPU entry areas without KASLR; Date: Fri, 3 Mar 2023 17:06:45 +0100; Message-Id: <20230303160645.3594-1-mkoutny@suse.com> |
Series | x86/mm: Do not shuffle CPU entry areas without KASLR |
Commit Message
Michal Koutný
March 3, 2023, 4:06 p.m. UTC
Commit 97e3d26b5e5f ("x86/mm: Randomize per-cpu entry area") fixed an
omission of KASLR on the CPU entry areas. It does not take the KASLR
switches into account, though, which may result in unintended
non-determinism when a user wants to avoid it (e.g. for debugging or
benchmarking).

Generate only a single combination of CPU entry area offsets -- the
linear array that existed prior to randomization -- when KASLR is
turned off.
Signed-off-by: Michal Koutný <mkoutny@suse.com>
---
arch/x86/mm/cpu_entry_area.c | 7 +++++++
1 file changed, 7 insertions(+)
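To make the intended behaviour concrete, here is a minimal userspace sketch (not kernel code) of the two assignment modes the patch distinguishes: a fixed identity mapping of CPU to entry-area slot when KASLR is off, and a distinct pseudo-random slot per CPU when it is on (mirroring the rejection-sampling loop added by commit 97e3d26b5e5f). NR_CPUS_SIM and MAX_CEA_SIM are made-up stand-ins for the number of possible CPUs and for max_cea.

```c
/*
 * Illustrative userspace model only -- not kernel code.
 * NR_CPUS_SIM and MAX_CEA_SIM are made-up stand-ins for the number of
 * possible CPUs and for max_cea (the number of entry-area slots).
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NR_CPUS_SIM  8
#define MAX_CEA_SIM 64

static unsigned int cea_offset_sim[NR_CPUS_SIM];

static void init_cea_offsets_sim(bool kaslr)
{
	unsigned int i, j;

	if (!kaslr) {
		/* What the patch adds: keep the pre-randomization linear layout. */
		for (i = 0; i < NR_CPUS_SIM; i++)
			cea_offset_sim[i] = i;
		return;
	}

	/*
	 * KASLR case: give each CPU a distinct pseudo-random slot
	 * (rejection sampling, like the "O(sodding terrible)" loop).
	 */
	for (i = 0; i < NR_CPUS_SIM; i++) {
		unsigned int cea;
again:
		cea = (unsigned int)rand() % MAX_CEA_SIM;
		for (j = 0; j < i; j++)
			if (cea_offset_sim[j] == cea)
				goto again;
		cea_offset_sim[i] = cea;
	}
}

int main(void)
{
	unsigned int i;

	srand((unsigned int)time(NULL));
	init_cea_offsets_sim(false);	/* KASLR off: deterministic layout */
	for (i = 0; i < NR_CPUS_SIM; i++)
		printf("cpu %u -> slot %u\n", i, cea_offset_sim[i]);
	return 0;
}
```

With the KASLR-off branch the layout is identical on every boot, which is exactly the reproducibility the debugging/benchmarking use case asks for.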
Comments
On 3/3/23 08:06, Michal Koutný wrote:
> @@ -29,6 +30,12 @@ static __init void init_cea_offsets(void)
>  	unsigned int max_cea;
>  	unsigned int i, j;
>
> +	if (!kaslr_memory_enabled()) {
> +		for_each_possible_cpu(i)
> +			per_cpu(_cea_offset, i) = i;
> +		return;
> +	}

Should this be kaslr_memory_enabled() or kaslr_enabled()?  The delta
seems to be CONFIG_KASAN, and the cpu entry area randomization works
just fine with KASAN after some recent fixes.

I _think_ that makes cpu entry area randomization more like module
randomization which would point toward kaslr_enabled().
On Fri, Mar 03, 2023 at 01:24:53PM -0800, Dave Hansen <dave.hansen@intel.com> wrote:
> Should this be kaslr_memory_enabled() or kaslr_enabled()?

Originally, I had chosen kaslr_enabled(), seeing the PGD requirement of
KASAN (the whole randomization area CPU_ENTRY_AREA_MAP_SIZE would fit
in a PGD after all).

> The delta seems to be CONFIG_KASAN, and the cpu entry area randomization
> works just fine with KASAN after some recent fixes.

But then I found KASAN code trying to be smart and having the fixups,
hence I chickened out to kaslr_memory_enabled().

> I _think_ that makes cpu entry area randomization more like module
> randomization which would point toward kaslr_enabled().

<del>I understood the only difference between kaslr_enabled() and
kaslr_memory_enabled() is the PGD alignment of the respective regions.
(Although I don't see where KASAN breaks with unaligned ranges, except
for better efficiency of the page tables.)</del>

I've just found your [1], wondering the same.

That being said, I will send v2 with just the kaslr_enabled() guard and
an updated commit message to beware of the KASAN fixups (when
backporting).

Thanks,
Michal

[1] https://lore.kernel.org/r/299fbb80-e3ab-3b7c-3491-e85cac107930@intel.com/
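As a side note for readers following the guard discussion: per the exchange above, the only delta between the two helpers is the CONFIG_KASAN check. The toy model below is not the kernel's <asm/setup.h>; boot_kaslr_flag and config_kasan are made-up stand-ins that merely encode that relationship.

```c
/*
 * Toy userspace model of the relationship discussed above -- not the
 * kernel's <asm/setup.h>.  boot_kaslr_flag and config_kasan are
 * made-up stand-ins for the KASLR boot state and CONFIG_KASAN.
 */
#include <stdbool.h>
#include <stdio.h>

static bool boot_kaslr_flag = true;   /* KASLR active at boot           */
static bool config_kasan    = false;  /* kernel built with KASAN or not */

static bool kaslr_enabled(void)
{
	return boot_kaslr_flag;
}

static bool kaslr_memory_enabled(void)
{
	/* The delta pointed out in the review: the CONFIG_KASAN check. */
	return kaslr_enabled() && !config_kasan;
}

int main(void)
{
	printf("kaslr_enabled=%d kaslr_memory_enabled=%d\n",
	       kaslr_enabled(), kaslr_memory_enabled());
	return 0;
}
```

The v2 announced above switches the new early return to the wider kaslr_enabled() condition, so the entry areas are shuffled whenever KASLR itself is active, with or without KASAN.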
```diff
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 7316a8224259..f5e93df096fb 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -10,6 +10,7 @@
 #include <asm/fixmap.h>
 #include <asm/desc.h>
 #include <asm/kasan.h>
+#include <asm/setup.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);
 
@@ -29,6 +30,12 @@ static __init void init_cea_offsets(void)
 	unsigned int max_cea;
 	unsigned int i, j;
 
+	if (!kaslr_memory_enabled()) {
+		for_each_possible_cpu(i)
+			per_cpu(_cea_offset, i) = i;
+		return;
+	}
+
 	max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;
 
 	/* O(sodding terrible) */
```