x86/mm: Do not shuffle CPU entry areas without KASLR

Message ID 20230303160645.3594-1-mkoutny@suse.com
State New
Series x86/mm: Do not shuffle CPU entry areas without KASLR

Commit Message

Michal Koutný March 3, 2023, 4:06 p.m. UTC
Commit 97e3d26b5e5f ("x86/mm: Randomize per-cpu entry area") fixed
an omission of KASLR on CPU entry areas. However, it does not take the
KASLR boot-time switch into account, which may result in unintended
non-determinism when a user wants to avoid it (e.g. for debugging or
benchmarking).

When KASLR is turned off, generate only a single combination of CPU
entry area offsets -- the linear array that existed prior to
randomization.

Signed-off-by: Michal Koutný <mkoutny@suse.com>
---
 arch/x86/mm/cpu_entry_area.c | 7 +++++++
 1 file changed, 7 insertions(+)
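
For context when reading the patch below (existing code in
arch/x86/mm/cpu_entry_area.c, not part of this patch): the per-cpu
_cea_offset is consumed when computing each CPU's entry-area address,
so assigning per_cpu(_cea_offset, i) = i restores the linear layout
that existed before 97e3d26b5e5f. Roughly (quoted from the v6.2-era
tree; check the exact target tree when backporting):

    static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, _cea_offset);

    static __always_inline unsigned int cea_offset(unsigned int cpu)
    {
            return per_cpu(_cea_offset, cpu);
    }

    noinstr struct cpu_entry_area *get_cpu_entry_area(int cpu)
    {
            unsigned long va = CPU_ENTRY_AREA_PER_CPU +
                               cea_offset(cpu) * CPU_ENTRY_AREA_SIZE;

            return (struct cpu_entry_area *) va;
    }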
  

Comments

Dave Hansen March 3, 2023, 9:24 p.m. UTC | #1
On 3/3/23 08:06, Michal Koutný wrote:
> @@ -29,6 +30,12 @@ static __init void init_cea_offsets(void)
>  	unsigned int max_cea;
>  	unsigned int i, j;
>  
> +	if (!kaslr_memory_enabled()) {
> +		for_each_possible_cpu(i)
> +			per_cpu(_cea_offset, i) = i;
> +		return;
> +	}

Should this be kaslr_memory_enabled() or kaslr_enabled()?

The delta seems to be CONFIG_KASAN, and the cpu entry area randomization
works just fine with KASAN after some recent fixes.  I _think_ that
makes cpu entry area randomization more like module randomization, which
would point toward kaslr_enabled().
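
[For reference, the two helpers under discussion, roughly as they
appear in the v6.2-era arch/x86/include/asm/setup.h -- the delta is
indeed the CONFIG_KASAN check:]

    static inline bool kaslr_enabled(void)
    {
            return !!(boot_params.hdr.loadflags & KASLR_FLAG);
    }

    /*
     * Apply no randomization if KASLR was disabled at boot or if KASAN
     * is enabled. KASAN shadow mappings rely on regions being PGD
     * aligned.
     */
    static inline bool kaslr_memory_enabled(void)
    {
            return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
    }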
  
Michal Koutný March 3, 2023, 11:04 p.m. UTC | #2
On Fri, Mar 03, 2023 at 01:24:53PM -0800, Dave Hansen <dave.hansen@intel.com> wrote:
> Should this be kaslr_memory_enabled() or kaslr_enabled()?

Originally, I had chosen kaslr_enabled(), seeing the PGD requirement of
KASAN (the whole randomization area CPU_ENTRY_AREA_MAP_SIZE would fit in
a PGD after all).

> The delta seems to be CONFIG_KASAN, and the cpu entry area randomization
> works just fine with KASAN after some recent fixes.

But then I found the KASAN code trying to be smart and having the
fixups, hence I chickened out and went with kaslr_memory_enabled().

> I _think_ that makes cpu entry area randomization more like module
> randomization which would point toward kaslr_enabled().

<del>I understood the only difference between kaslr_enabled() and
kaslr_memory_enabled() to be the PGD alignment of the respective
regions. (Although I don't see where KASAN breaks with unaligned ranges;
the alignment seems to only buy better page-table efficiency.)</del>

I've just found your [1], wondering about the same thing.


That being said, I will send a v2 with just the kaslr_enabled() guard
and an updated commit message warning about the KASAN fixups (relevant
when backporting).

Thanks,
Michal

[1] https://lore.kernel.org/r/299fbb80-e3ab-3b7c-3491-e85cac107930@intel.com/
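
(A hypothetical compile-time illustration of the "fits in a PGD" point
above -- not code from any tree: on x86-64 with 4-level paging,
P4D_SIZE == PGDIR_SIZE == 1UL << 39, i.e. 512 GiB, and 97e3d26b5e5f
sized the randomization range as CPU_ENTRY_AREA_MAP_SIZE == P4D_SIZE,
so the whole range stays within a single PGD entry:)

    /* Hypothetical assertion, for illustration only. */
    BUILD_BUG_ON(CPU_ENTRY_AREA_MAP_SIZE > P4D_SIZE);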
  

Patch

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 7316a8224259..f5e93df096fb 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -10,6 +10,7 @@ 
 #include <asm/fixmap.h>
 #include <asm/desc.h>
 #include <asm/kasan.h>
+#include <asm/setup.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);
 
@@ -29,6 +30,12 @@  static __init void init_cea_offsets(void)
 	unsigned int max_cea;
 	unsigned int i, j;
 
+	if (!kaslr_memory_enabled()) {
+		for_each_possible_cpu(i)
+			per_cpu(_cea_offset, i) = i;
+		return;
+	}
+
 	max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;
 
 	/* O(sodding terrible) */
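
For completeness, the remainder of init_cea_offsets() -- the
randomization loop that the new early return bypasses -- roughly as
introduced by 97e3d26b5e5f (later trees spell the RNG helper
get_random_u32_below() rather than prandom_u32_max()):

    	for_each_possible_cpu(i) {
    		unsigned int cea;

    again:
    		cea = prandom_u32_max(max_cea);

    		/* Reject offsets already taken by another CPU. */
    		for_each_possible_cpu(j) {
    			if (cea_offset(j) == cea)
    				goto again;

    			if (i == j)
    				break;
    		}

    		per_cpu(_cea_offset, i) = cea;
    	}
    }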