[v4,06/12] x86/mtrr: replace vendor tests in MTRR code

Message ID 20230306163425.8324-7-jgross@suse.com
State New
Series x86/mtrr: fix handling with PAT but without MTRR

Commit Message

Juergen Gross March 6, 2023, 4:34 p.m. UTC
Modern CPUs all share the same MTRR interface implemented via
generic_mtrr_ops.

At several places in MTRR code this generic interface is deduced via
is_cpu(INTEL) tests, which is only working due to X86_VENDOR_INTEL
being 0 (the is_cpu() macro is testing mtrr_if->vendor, which isn't
explicitly set in generic_mtrr_ops).

Fix that by replacing the is_cpu(INTEL) tests with testing for mtrr_if
to be &generic_mtrr_ops.

The only other place where the .vendor member of struct mtrr_ops is
being used is in set_num_var_ranges(), where depending on the vendor
the number of MTRR registers is determined. This can easily be changed
by replacing .vendor with the static number of MTRR registers.

It should be noted that the test "is_cpu(HYGON)" never returned
true, as there is no struct mtrr_ops instance with that vendor information.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 arch/x86/kernel/cpu/mtrr/amd.c     | 2 +-
 arch/x86/kernel/cpu/mtrr/centaur.c | 2 +-
 arch/x86/kernel/cpu/mtrr/cleanup.c | 4 ++--
 arch/x86/kernel/cpu/mtrr/cyrix.c   | 2 +-
 arch/x86/kernel/cpu/mtrr/generic.c | 2 +-
 arch/x86/kernel/cpu/mtrr/mtrr.c    | 8 +++-----
 arch/x86/kernel/cpu/mtrr/mtrr.h    | 4 +---
 7 files changed, 10 insertions(+), 14 deletions(-)
  

Comments

Borislav Petkov March 24, 2023, 4:56 p.m. UTC | #1
On Mon, Mar 06, 2023 at 05:34:19PM +0100, Juergen Gross wrote:
> Modern CPUs all share the same MTRR interface implemented via
> generic_mtrr_ops.
> 
> At several places in MTRR code this generic interface is deduced via
> is_cpu(INTEL) tests, which is only working due to X86_VENDOR_INTEL
> being 0 (the is_cpu() macro is testing mtrr_if->vendor, which isn't
> explicitly set in generic_mtrr_ops).
> 
> Fix that by replacing the is_cpu(INTEL) tests with testing for mtrr_if
> to be &generic_mtrr_ops.

Two things:

* is_cpu() checks also whether mtrr_if is set. And we don't set it for
all vendors. I wanted to replace that thing with a vendor check recently
but there's that little issue.

I guess for the cases where we have the generic MTRR implementation, we
can safely assume that mtrr_if is set. Which leads me to the second
thing:

* If you're going to test for &generic_mtrr_ops, then you can just as
well do

	cpu_feature_enabled(X86_FEATURE_MTRR)

which is a lot more telling.

> diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
> index 5fe62ee0361b..0c83990501f5 100644
> --- a/arch/x86/kernel/cpu/mtrr/mtrr.c
> +++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
> @@ -108,14 +108,12 @@ static int have_wrcomb(void)
>  /*  This function returns the number of variable MTRRs  */
>  static void __init set_num_var_ranges(bool use_generic)
>  {
> -	unsigned long config = 0, dummy;
> +	unsigned long config, dummy;
>  
>  	if (use_generic)
>  		rdmsr(MSR_MTRRcap, config, dummy);
> -	else if (is_cpu(AMD) || is_cpu(HYGON))
> -		config = 2;
> -	else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
> -		config = 8;
> +	else
> +		config = mtrr_if->var_regs;
>  
>  	num_var_ranges = config & 0xff;
>  }

Since you're touching this function, you might simply expand its body in
its only call site in mtrr_bp_init(), put a comment above the expanded
code and remove that function.

That is, if we're going to do the ->var_regs thing.

Thx.
  
Juergen Gross March 27, 2023, 5:43 a.m. UTC | #2
On 24.03.23 17:56, Borislav Petkov wrote:
> On Mon, Mar 06, 2023 at 05:34:19PM +0100, Juergen Gross wrote:
>> Modern CPUs all share the same MTRR interface implemented via
>> generic_mtrr_ops.
>>
>> At several places in MTRR code this generic interface is deduced via
>> is_cpu(INTEL) tests, which is only working due to X86_VENDOR_INTEL
>> being 0 (the is_cpu() macro is testing mtrr_if->vendor, which isn't
>> explicitly set in generic_mtrr_ops).
>>
>> Fix that by replacing the is_cpu(INTEL) tests with testing for mtrr_if
>> to be &generic_mtrr_ops.
> 
> Two things:
> 
> * is_cpu() checks also whether mtrr_if is set. And we don't set it for
> all vendors. I wanted to replace that thing with a vendor check recently
> but there's that little issue.

The is_cpu() checks are either in functions reachable only with mtrr_if being
set, or are testing for INTEL, which is replaced by the test of mtrr_if being
&generic_mtrr_ops as written in the commit message.

> I guess for the cases where we have the generic MTRR implementation, we
> can safely assume that mtrr_if is set. Which leads me to the second
> thing:
> 
> * If you're going to test for &generic_mtrr_ops, then you can just as
> well do
> 
> 	cpu_feature_enabled(X86_FEATURE_MTRR)
> 
> which is a lot more telling.

Yes, I think this is true.

> 
>> diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
>> index 5fe62ee0361b..0c83990501f5 100644
>> --- a/arch/x86/kernel/cpu/mtrr/mtrr.c
>> +++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
>> @@ -108,14 +108,12 @@ static int have_wrcomb(void)
>>   /*  This function returns the number of variable MTRRs  */
>>   static void __init set_num_var_ranges(bool use_generic)
>>   {
>> -	unsigned long config = 0, dummy;
>> +	unsigned long config, dummy;
>>   
>>   	if (use_generic)
>>   		rdmsr(MSR_MTRRcap, config, dummy);
>> -	else if (is_cpu(AMD) || is_cpu(HYGON))
>> -		config = 2;
>> -	else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
>> -		config = 8;
>> +	else
>> +		config = mtrr_if->var_regs;
>>   
>>   	num_var_ranges = config & 0xff;
>>   }
> 
> Since you're touching this function, you might simply expand its body in
> its only call site in mtrr_bp_init(), put a comment above the expanded
> code and remove that function.

Okay.


Juergen
  
Borislav Petkov March 27, 2023, 7:14 a.m. UTC | #3
On Mon, Mar 27, 2023 at 07:43:58AM +0200, Juergen Gross wrote:
> The is_cpu() checks are either in functions reachable only with mtrr_if being
> set, or are testing for INTEL, which is replaced by the test of mtrr_if being
> &generic_mtrr_ops as written in the commit message.

I went through all call sites... it seems like what you're saying should
work. I guess this mtrr_if check was added as a precaution in 2002 as
part of a cleanup:

commit 8fbdcb188e31ac901e216b466b97e90e8b057daa
Author: Dave Jones <davej@suse.de>
Date:   Wed Aug 14 21:14:22 2002 -0700

    [PATCH] Modular x86 MTRR driver.
    
    This patch from Pat Mochel cleans up the hell that was mtrr.c
    into something a lot more modular and easy to understand, by
    doing the implementation-per-file as has been done to various
    other things by Pat and myself over the last months.
    
    It's functionally identical from a kernel internal point of view,
    and a userspace point of view, and is basically just a very large
    code clean up.

So let's see what catches fire...

Thx.
  

Patch

diff --git a/arch/x86/kernel/cpu/mtrr/amd.c b/arch/x86/kernel/cpu/mtrr/amd.c
index eff6ac62c0ff..ef3e8e42b782 100644
--- a/arch/x86/kernel/cpu/mtrr/amd.c
+++ b/arch/x86/kernel/cpu/mtrr/amd.c
@@ -110,7 +110,7 @@  amd_validate_add_page(unsigned long base, unsigned long size, unsigned int type)
 }
 
 const struct mtrr_ops amd_mtrr_ops = {
-	.vendor            = X86_VENDOR_AMD,
+	.var_regs          = 2,
 	.set               = amd_set_mtrr,
 	.get               = amd_get_mtrr,
 	.get_free_region   = generic_get_free_region,
diff --git a/arch/x86/kernel/cpu/mtrr/centaur.c b/arch/x86/kernel/cpu/mtrr/centaur.c
index b8a74eddde83..4466ddeb0125 100644
--- a/arch/x86/kernel/cpu/mtrr/centaur.c
+++ b/arch/x86/kernel/cpu/mtrr/centaur.c
@@ -112,7 +112,7 @@  centaur_validate_add_page(unsigned long base, unsigned long size, unsigned int t
 }
 
 const struct mtrr_ops centaur_mtrr_ops = {
-	.vendor            = X86_VENDOR_CENTAUR,
+	.var_regs          = 8,
 	.set               = centaur_set_mcr,
 	.get               = centaur_get_mcr,
 	.get_free_region   = centaur_get_free_region,
diff --git a/arch/x86/kernel/cpu/mtrr/cleanup.c b/arch/x86/kernel/cpu/mtrr/cleanup.c
index b5f43049fa5f..1c2c0c252fa5 100644
--- a/arch/x86/kernel/cpu/mtrr/cleanup.c
+++ b/arch/x86/kernel/cpu/mtrr/cleanup.c
@@ -689,7 +689,7 @@  int __init mtrr_cleanup(unsigned address_bits)
 	int index_good;
 	int i;
 
-	if (!is_cpu(INTEL) || enable_mtrr_cleanup < 1)
+	if (mtrr_if != &generic_mtrr_ops || enable_mtrr_cleanup < 1)
 		return 0;
 
 	rdmsr(MSR_MTRRdefType, def, dummy);
@@ -886,7 +886,7 @@  int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
 	 * Make sure we only trim uncachable memory on machines that
 	 * support the Intel MTRR architecture:
 	 */
-	if (!is_cpu(INTEL) || disable_mtrr_trim)
+	if (mtrr_if != &generic_mtrr_ops || disable_mtrr_trim)
 		return 0;
 
 	rdmsr(MSR_MTRRdefType, def, dummy);
diff --git a/arch/x86/kernel/cpu/mtrr/cyrix.c b/arch/x86/kernel/cpu/mtrr/cyrix.c
index 173b9e01e623..238dad57d4d6 100644
--- a/arch/x86/kernel/cpu/mtrr/cyrix.c
+++ b/arch/x86/kernel/cpu/mtrr/cyrix.c
@@ -235,7 +235,7 @@  static void cyrix_set_arr(unsigned int reg, unsigned long base,
 }
 
 const struct mtrr_ops cyrix_mtrr_ops = {
-	.vendor            = X86_VENDOR_CYRIX,
+	.var_regs          = 8,
 	.set               = cyrix_set_arr,
 	.get               = cyrix_get_arr,
 	.get_free_region   = cyrix_get_free_region,
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 49b4cc923312..e25a44c2c950 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -827,7 +827,7 @@  int generic_validate_add_page(unsigned long base, unsigned long size,
 	 * For Intel PPro stepping <= 7
 	 * must be 4 MiB aligned and not touch 0x70000000 -> 0x7003FFFF
 	 */
-	if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
+	if (mtrr_if == &generic_mtrr_ops && boot_cpu_data.x86 == 6 &&
 	    boot_cpu_data.x86_model == 1 &&
 	    boot_cpu_data.x86_stepping <= 7) {
 		if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 5fe62ee0361b..0c83990501f5 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -108,14 +108,12 @@  static int have_wrcomb(void)
 /*  This function returns the number of variable MTRRs  */
 static void __init set_num_var_ranges(bool use_generic)
 {
-	unsigned long config = 0, dummy;
+	unsigned long config, dummy;
 
 	if (use_generic)
 		rdmsr(MSR_MTRRcap, config, dummy);
-	else if (is_cpu(AMD) || is_cpu(HYGON))
-		config = 2;
-	else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
-		config = 8;
+	else
+		config = mtrr_if->var_regs;
 
 	num_var_ranges = config & 0xff;
 }
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 02eb5871492d..a3c362d3d5bf 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -13,7 +13,7 @@ 
 extern unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 
 struct mtrr_ops {
-	u32	vendor;
+	u32	var_regs;
 	void	(*set)(unsigned int reg, unsigned long base,
 		       unsigned long size, mtrr_type type);
 	void	(*get)(unsigned int reg, unsigned long *base,
@@ -54,8 +54,6 @@  bool get_mtrr_state(void);
 extern u64 size_or_mask, size_and_mask;
 extern const struct mtrr_ops *mtrr_if;
 
-#define is_cpu(vnd)	(mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
-
 extern unsigned int num_var_ranges;
 extern u64 mtrr_tom2;
 extern struct mtrr_state_type mtrr_state;