[v3,2/2] x86/pm: Add enumeration check before spec MSRs save/restore setup

Message ID c24db75d69df6e66c0465e13676ad3f2837a2ed8.1668539735.git.pawan.kumar.gupta@linux.intel.com
State: New
Series: Check enumeration before MSR save/restore

Commit Message

Pawan Gupta Nov. 15, 2022, 7:17 p.m. UTC
  pm_save_spec_msr() keeps a list of all the MSRs which _might_ need to be
saved and restored at hibernate and resume.  However, it has zero
awareness of CPU support for these MSRs.  It mostly works by
unconditionally attempting to manipulate these MSRs and relying on
rdmsrl_safe() being able to handle a #GP on CPUs where the support is
unavailable.

However, it's possible for reads (RDMSR) to be supported for a given MSR
while writes (WRMSR) are not.  In this case, msr_build_context() sees a
successful read (RDMSR) and marks the MSR as 'valid'.  Later,
restore_processor_state() tries to restore it, but the write (WRMSR) is
not allowed on the Intel Atom N2600, producing a nasty (but harmless)
error message:

  unchecked MSR access error: WRMSR to 0x122 (tried to write 0x0000000000000002) \
     at rIP: 0xffffffff8b07a574 (native_write_msr+0x4/0x20)
  Call Trace:
   <TASK>
   restore_processor_state
   x86_acpi_suspend_lowlevel
   acpi_suspend_enter
   suspend_devices_and_enter
   pm_suspend.cold
   state_store
   kernfs_fop_write_iter
   vfs_write
   ksys_write
   do_syscall_64
   ? do_syscall_64
   ? up_read
   ? lock_is_held_type
   ? asm_exc_page_fault
   ? lockdep_hardirqs_on
   entry_SYSCALL_64_after_hwframe

To fix this, add the corresponding X86_FEATURE bit for each MSR.  Avoid
trying to manipulate the MSR when the feature bit is clear. This
required adding an X86_FEATURE bit for MSRs that do not have one already,
but it's a small price to pay.

Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
Reported-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: stable@kernel.org
---
 arch/x86/power/cpu.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)
  

Comments

Rafael J. Wysocki Nov. 15, 2022, 7:23 p.m. UTC | #1
On Tue, Nov 15, 2022 at 8:17 PM Pawan Gupta
<pawan.kumar.gupta@linux.intel.com> wrote:
>
> pm_save_spec_msr() keeps a list of all the MSRs which _might_ need to be
> saved and restored at hibernate and resume.  However, it has zero
> awareness of CPU support for these MSRs.  It mostly works by
> unconditionally attempting to manipulate these MSRs and relying on
> rdmsrl_safe() being able to handle a #GP on CPUs where the support is
> unavailable.
>
> However, it's possible for reads (RDMSR) to be supported for a given MSR
> while writes (WRMSR) are not.  In this case, msr_build_context() sees a
> successful read (RDMSR) and marks the MSR as 'valid'.  Then, later, a
> write (WRMSR) fails, producing a nasty (but harmless) error message.
> This causes restore_processor_state() to try and restore it, but writing
> this MSR is not allowed on the Intel Atom N2600 leading to:
>
>   unchecked MSR access error: WRMSR to 0x122 (tried to write 0x0000000000000002) \
>      at rIP: 0xffffffff8b07a574 (native_write_msr+0x4/0x20)
>   Call Trace:
>    <TASK>
>    restore_processor_state
>    x86_acpi_suspend_lowlevel
>    acpi_suspend_enter
>    suspend_devices_and_enter
>    pm_suspend.cold
>    state_store
>    kernfs_fop_write_iter
>    vfs_write
>    ksys_write
>    do_syscall_64
>    ? do_syscall_64
>    ? up_read
>    ? lock_is_held_type
>    ? asm_exc_page_fault
>    ? lockdep_hardirqs_on
>    entry_SYSCALL_64_after_hwframe
>
> To fix this, add the corresponding X86_FEATURE bit for each MSR.  Avoid
> trying to manipulate the MSR when the feature bit is clear. This
> required adding a X86_FEATURE bit for MSRs that do not have one already,
> but it's a small price to pay.
>
> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
> Reported-by: Hans de Goede <hdegoede@redhat.com>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Cc: stable@kernel.org

Fine with me:

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

> ---
>  arch/x86/power/cpu.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
> index 4cd39f304e20..11a7e28f8985 100644
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -511,18 +511,27 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
>         return ret;
>  }
>
> +struct msr_enumeration {
> +       u32 msr_no;
> +       u32 feature;
> +};
> +
>  static void pm_save_spec_msr(void)
>  {
> -       u32 spec_msr_id[] = {
> -               MSR_IA32_SPEC_CTRL,
> -               MSR_IA32_TSX_CTRL,
> -               MSR_TSX_FORCE_ABORT,
> -               MSR_IA32_MCU_OPT_CTRL,
> -               MSR_AMD64_LS_CFG,
> -               MSR_AMD64_DE_CFG,
> +       struct msr_enumeration msr_enum[] = {
> +               {MSR_IA32_SPEC_CTRL,    X86_FEATURE_MSR_SPEC_CTRL},
> +               {MSR_IA32_TSX_CTRL,     X86_FEATURE_MSR_TSX_CTRL},
> +               {MSR_TSX_FORCE_ABORT,   X86_FEATURE_TSX_FORCE_ABORT},
> +               {MSR_IA32_MCU_OPT_CTRL, X86_FEATURE_SRBDS_CTRL},
> +               {MSR_AMD64_LS_CFG,      X86_FEATURE_LS_CFG_SSBD},
> +               {MSR_AMD64_DE_CFG,      X86_FEATURE_LFENCE_RDTSC},
>         };
> +       int i;
>
> -       msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
> +       for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
> +               if (boot_cpu_has(msr_enum[i].feature))
> +                       msr_build_context(&msr_enum[i].msr_no, 1);
> +       }
>  }
>
>  static int pm_check_save_msr(void)
> --
> 2.37.3
>
  
Pawan Gupta Nov. 15, 2022, 8:48 p.m. UTC | #2
On Tue, Nov 15, 2022 at 08:23:35PM +0100, Rafael J. Wysocki wrote:
>On Tue, Nov 15, 2022 at 8:17 PM Pawan Gupta
><pawan.kumar.gupta@linux.intel.com> wrote:
>>
>> pm_save_spec_msr() keeps a list of all the MSRs which _might_ need to be
>> saved and restored at hibernate and resume.  However, it has zero
>> awareness of CPU support for these MSRs.  It mostly works by
>> unconditionally attempting to manipulate these MSRs and relying on
>> rdmsrl_safe() being able to handle a #GP on CPUs where the support is
>> unavailable.
>>
>> However, it's possible for reads (RDMSR) to be supported for a given MSR
>> while writes (WRMSR) are not.  In this case, msr_build_context() sees a
>> successful read (RDMSR) and marks the MSR as 'valid'.  Then, later, a
>> write (WRMSR) fails, producing a nasty (but harmless) error message.
>> This causes restore_processor_state() to try and restore it, but writing
>> this MSR is not allowed on the Intel Atom N2600 leading to:
>>
>>   unchecked MSR access error: WRMSR to 0x122 (tried to write 0x0000000000000002) \
>>      at rIP: 0xffffffff8b07a574 (native_write_msr+0x4/0x20)
>>   Call Trace:
>>    <TASK>
>>    restore_processor_state
>>    x86_acpi_suspend_lowlevel
>>    acpi_suspend_enter
>>    suspend_devices_and_enter
>>    pm_suspend.cold
>>    state_store
>>    kernfs_fop_write_iter
>>    vfs_write
>>    ksys_write
>>    do_syscall_64
>>    ? do_syscall_64
>>    ? up_read
>>    ? lock_is_held_type
>>    ? asm_exc_page_fault
>>    ? lockdep_hardirqs_on
>>    entry_SYSCALL_64_after_hwframe
>>
>> To fix this, add the corresponding X86_FEATURE bit for each MSR.  Avoid
>> trying to manipulate the MSR when the feature bit is clear. This
>> required adding a X86_FEATURE bit for MSRs that do not have one already,
>> but it's a small price to pay.
>>
>> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
>> Reported-by: Hans de Goede <hdegoede@redhat.com>
>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>> Cc: stable@kernel.org
>
>Fine with me:
>
>Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Thanks.
  
Dave Hansen Nov. 18, 2022, 6:12 p.m. UTC | #3
On 11/15/22 11:17, Pawan Gupta wrote:
> To fix this, add the corresponding X86_FEATURE bit for each MSR.  Avoid
> trying to manipulate the MSR when the feature bit is clear. This
> required adding a X86_FEATURE bit for MSRs that do not have one already,
> but it's a small price to pay.
> 
> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
> Reported-by: Hans de Goede <hdegoede@redhat.com>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Cc: stable@kernel.org

Thanks for fixing this up.  The X86_FEATURE approach is a good one.  The
"poking at random MSRs" always seemed a bit wonky.

Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
  
Pawan Gupta Nov. 18, 2022, 6:27 p.m. UTC | #4
On Fri, Nov 18, 2022 at 10:12:32AM -0800, Dave Hansen wrote:
>On 11/15/22 11:17, Pawan Gupta wrote:
>> To fix this, add the corresponding X86_FEATURE bit for each MSR.  Avoid
>> trying to manipulate the MSR when the feature bit is clear. This
>> required adding a X86_FEATURE bit for MSRs that do not have one already,
>> but it's a small price to pay.
>>
>> Fixes: 73924ec4d560 ("x86/pm: Save the MSR validity status at context setup")
>> Reported-by: Hans de Goede <hdegoede@redhat.com>
>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>> Cc: stable@kernel.org
>
>Thanks for fixing this up.  The X86_FEATURE approach is a good one.  The
>"poking at random MSRs" always seemed a bit wonky.
>
>Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>

Thanks.
  

Patch

diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 4cd39f304e20..11a7e28f8985 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -511,18 +511,27 @@  static int pm_cpu_check(const struct x86_cpu_id *c)
 	return ret;
 }
 
+struct msr_enumeration {
+	u32 msr_no;
+	u32 feature;
+};
+
 static void pm_save_spec_msr(void)
 {
-	u32 spec_msr_id[] = {
-		MSR_IA32_SPEC_CTRL,
-		MSR_IA32_TSX_CTRL,
-		MSR_TSX_FORCE_ABORT,
-		MSR_IA32_MCU_OPT_CTRL,
-		MSR_AMD64_LS_CFG,
-		MSR_AMD64_DE_CFG,
+	struct msr_enumeration msr_enum[] = {
+		{MSR_IA32_SPEC_CTRL,	X86_FEATURE_MSR_SPEC_CTRL},
+		{MSR_IA32_TSX_CTRL,	X86_FEATURE_MSR_TSX_CTRL},
+		{MSR_TSX_FORCE_ABORT,	X86_FEATURE_TSX_FORCE_ABORT},
+		{MSR_IA32_MCU_OPT_CTRL,	X86_FEATURE_SRBDS_CTRL},
+		{MSR_AMD64_LS_CFG,	X86_FEATURE_LS_CFG_SSBD},
+		{MSR_AMD64_DE_CFG,	X86_FEATURE_LFENCE_RDTSC},
 	};
+	int i;
 
-	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+	for (i = 0; i < ARRAY_SIZE(msr_enum); i++) {
+		if (boot_cpu_has(msr_enum[i].feature))
+			msr_build_context(&msr_enum[i].msr_no, 1);
+	}
 }
 
 static int pm_check_save_msr(void)