[RFC,x86/nmi] Fix out-of-order nesting checks

Message ID 0cbff831-6e3d-431c-9830-ee65ee7787ff@paulmck-laptop

Commit Message

Paul E. McKenney Oct. 11, 2023, 6:40 p.m. UTC
  The ->idt_seq and ->recv_jiffies variables added by commit 1a3ea611fc10
("x86/nmi: Accumulate NMI-progress evidence in exc_nmi()") place
the exit-time check of the bottom bit of ->idt_seq after the
this_cpu_dec_return() that re-enables NMI nesting.  This can result in
the following sequence of events on a given CPU in kernels built with
CONFIG_NMI_CHECK_CPU=y:

o       An NMI arrives, and ->idt_seq is incremented to an odd number.
        In addition, nmi_state is set to NMI_EXECUTING==1.

o       The NMI is processed.

o       The this_cpu_dec_return(nmi_state) zeroes nmi_state and returns
        zero (NMI_NOT_RUNNING), thus opting out of the "goto nmi_restart".

o       Another NMI arrives and ->idt_seq is incremented to an even
        number, triggering the warning.  But all is just fine, at least
        assuming we don't get so many closely spaced NMIs that the stack
        overflows or some such.

Experience on the fleet indicates that the MTBF of this false positive
is about 70 years.  Or, for those who are not quite that patient, the
rate works out to about one false positive per week per 4,000 systems.

Fix this false-positive warning by moving the "nmi_restart" label before
the initial ->idt_seq increment/check and moving the this_cpu_dec_return()
to follow the final ->idt_seq increment/check.  This way, all nested NMIs
that get past the NMI_NOT_RUNNING check get a clean ->idt_seq slate.
And if they don't get past that check, they will set nmi_state to
NMI_LATCHED, which will cause the this_cpu_dec_return(nmi_state)
to restart.

Reported-by: Chris Mason <clm@fb.com>
Fixes: 1a3ea611fc10 ("x86/nmi: Accumulate NMI-progress evidence in exc_nmi()")
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <x86@kernel.org>
  

Comments

Ingo Molnar Oct. 12, 2023, 6:37 a.m. UTC | #1
* Paul E. McKenney <paulmck@kernel.org> wrote:

> [ ... commit message quoted in full ... ]

This looks like a sensible fix: the warning should obviously be atomic wrt.
the no-nesting region. I've applied your fix to tip:x86/irq, as it doesn't
seem urgent enough with an MTBF of 70 years to warrant tip:x86/urgent handling. ;-)

Thanks,

	Ingo
  
Paul E. McKenney Oct. 12, 2023, 10:45 a.m. UTC | #2
On Thu, Oct 12, 2023 at 08:37:25AM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> > [ ... commit message quoted in full ... ]
> 
> This looks like a sensible fix: the warning should obviously be atomic wrt. 
> the no-nesting region. I've applied your fix to tip:x86/irq, as it doesn't 
> seem urgent enough with a MTBF of 70 years to warrant tip:x86/urgent handling. ;-)

Works for me!  ;-)

And thank you!

							Thanx, Paul
  

Patch

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index a0c551846b35..4766b6bed443 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -507,12 +507,13 @@  DEFINE_IDTENTRY_RAW(exc_nmi)
 	}
 	this_cpu_write(nmi_state, NMI_EXECUTING);
 	this_cpu_write(nmi_cr2, read_cr2());
+
+nmi_restart:
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
 		WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
 		WARN_ON_ONCE(!(nsp->idt_seq & 0x1));
 		WRITE_ONCE(nsp->recv_jiffies, jiffies);
 	}
-nmi_restart:
 
 	/*
 	 * Needs to happen before DR7 is accessed, because the hypervisor can
@@ -548,16 +549,16 @@  DEFINE_IDTENTRY_RAW(exc_nmi)
 
 	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
 		write_cr2(this_cpu_read(nmi_cr2));
-	if (this_cpu_dec_return(nmi_state))
-		goto nmi_restart;
-
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
 		WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
 		WARN_ON_ONCE(nsp->idt_seq & 0x1);
 		WRITE_ONCE(nsp->recv_jiffies, jiffies);
 	}
+	if (this_cpu_dec_return(nmi_state))
+		goto nmi_restart;
+
+	if (user_mode(regs))
+		mds_user_clear_cpu_buffers();
 }
 
 #if IS_ENABLED(CONFIG_KVM_INTEL)