x86/uaccess: instrument copy_from_user_nmi()

Message ID 20221025221755.3810809-1-glider@google.com
State New
Series x86/uaccess: instrument copy_from_user_nmi()

Commit Message

Alexander Potapenko Oct. 25, 2022, 10:17 p.m. UTC
Make sure usercopy hooks from linux/instrumented.h are invoked for
copy_from_user_nmi().
This fixes KMSAN false positives reported when dumping opcodes for a
stack trace.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: x86@kernel.org
Signed-off-by: Alexander Potapenko <glider@google.com>
---
 arch/x86/lib/usercopy.c | 3 +++
 1 file changed, 3 insertions(+)
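
For context, the two hooks used here come from include/linux/instrumented.h
and expand to the sanitizer callbacks, roughly like this (paraphrased; the
exact bodies vary across kernel versions):

  static __always_inline void
  instrument_copy_from_user_before(const void *to, const void __user *from,
                                   unsigned long n)
  {
          kasan_check_write(to, n);
          kcsan_check_write(to, n);
  }

  static __always_inline void
  instrument_copy_from_user_after(const void *to, const void __user *from,
                                  unsigned long n, unsigned long left)
  {
          /* Mark the n - left successfully copied bytes as initialized. */
          kmsan_unpoison_memory(to, n - left);
  }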
  

Comments

Peter Zijlstra Oct. 26, 2022, 9:30 a.m. UTC | #1
On Wed, Oct 26, 2022 at 12:17:55AM +0200, Alexander Potapenko wrote:
> Make sure usercopy hooks from linux/instrumented.h are invoked for
> copy_from_user_nmi().
> This fixes KMSAN false positives reported when dumping opcodes for a
> stack trace.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: x86@kernel.org
> Signed-off-by: Alexander Potapenko <glider@google.com>
> ---
>  arch/x86/lib/usercopy.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
> index f1bb186171562..24b48af274173 100644
> --- a/arch/x86/lib/usercopy.c
> +++ b/arch/x86/lib/usercopy.c
> @@ -6,6 +6,7 @@
>  
>  #include <linux/uaccess.h>
>  #include <linux/export.h>
> +#include <linux/instrumented.h>
>  
>  #include <asm/tlbflush.h>
>  
> @@ -44,7 +45,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
>  	 * called from other contexts.
>  	 */
>  	pagefault_disable();
> +	instrument_copy_from_user_before(to, from, n);
>  	ret = raw_copy_from_user(to, from, n);
> +	instrument_copy_from_user_after(to, from, n, ret);
>  	pagefault_enable();
>  
>  	return ret;

Is all that instrumentation NMI safe? ISTR having seen locks in some of
that *SAN stuff.

Also did this want:

Fixes: 59298997df89 ("x86/uaccess: avoid check_object_size() in copy_from_user_nmi()")

?
  
Peter Zijlstra Oct. 27, 2022, 8:05 a.m. UTC | #2
On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> A bigger issue from the NMI perspective is probably
> having __msan_poison_alloca() inserted in every non-noinstr kernel
> function, because that hook may acquire the stackdepot lock.

*urgghhh* that's broken, that must not be. There is a *TON* of NMI
functions that are non-noinstr.

What's worse, it seems to do a memory allocation as well, and that's out
the window with PREEMPT_RT where you can't do even GFP_ATOMIC from
regular IRQ context.

That function is wholly unacceptable to be added to every kernel
function.
  
Alexander Potapenko Oct. 27, 2022, 6:26 p.m. UTC | #3
On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > A bigger issue from the NMI perspective is probably
> > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > function, because that hook may acquire the stackdepot lock.
>
> *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> functions that are non-noinstr.

__msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
currently implemented as:

  static __always_inline bool kmsan_in_runtime(void)
  {
          if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
                  return true;
          return kmsan_get_context()->kmsan_in_runtime;
  }

I think the easiest way to fix the NMI situation would be adding "if
in_nmi() return true"?
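
I.e. something along these lines (untested sketch; the in_nmi() check is
the only new part, the rest is the current implementation quoted above):

  static __always_inline bool kmsan_in_runtime(void)
  {
          if (in_nmi())
                  return true;    /* stay out of the runtime in NMI context */
          if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
                  return true;
          return kmsan_get_context()->kmsan_in_runtime;
  }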

Currently that will render kmsan_unpoison_memory() useless in NMI
context, but I think we don't need a kmsan_in_runtime() check there,
because unpoisoning is self-contained and normally does not recurse
(I guess we can tolerate a pr_err() on the rare assertion violation
path?).

> What's worse, it seems to do a memory allocation as well, and that's out
> the window with PREEMPT_RT where you can't do even GFP_ATOMIC from
> regular IRQ context.

Yes, there's a lazy call to alloc_pages() in lib/stackdepot.c that is
done when we run out of storage space.
It would be a pity to ignore everything that is happening inside
regular IRQs (by making kmsan_in_runtime() bail out on in_irq()) - I
think we occasionally see errors from there, e.g.
https://syzkaller.appspot.com/bug?id=233563e79a8e00f86412eb3d2fb4eb1f425e70c3
We could make stackdepot avoid allocating memory in IRQs/NMIs and hope
that the allocations done by __msan_poison_alloca() from regular
contexts replenish the pool fast enough to keep up with its consumption
from interrupts.
Another option would be to preallocate a very big chunk of memory for
stackdepot and never do allocations again.
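A hypothetical shape for that option (made-up names, since stackdepot
has no such knob today) would be a boot-time reservation:

  /* Hypothetical sketch: reserve all of stackdepot's slab memory at
   * boot so it never needs alloc_pages() afterwards.
   * stack_depot_preallocate() and STACK_DEPOT_MAX_SLABS are made-up
   * names, for illustration only. */
  static int __init kmsan_depot_prealloc(void)
  {
          return stack_depot_preallocate(STACK_DEPOT_MAX_SLABS);
  }
  early_initcall(kmsan_depot_prealloc);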

These tricks won't however save us from acquiring depot_lock from
lib/stackdepot.c every time we want to create a new origin.
But that should not be a problem by itself, because we always do
kmsan_enter_runtime() before accessing the stack depot - i.e. it won't
be taken recursively.
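For reference, the depot is only ever reached through that guard, in a
pattern roughly like this (simplified, not a verbatim quote of
mm/kmsan/):

  /* A nested KMSAN hook sees kmsan_in_runtime() == true and bails out,
   * so depot_lock is never taken recursively from the same context. */
  kmsan_enter_runtime();
  handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);
  kmsan_leave_runtime();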

Given that PREEMPT_RT is not the default at the moment, shall we make
KMSAN incompatible with it for the time being?

> That function is wholly unacceptable to be added to every kernel
> function.
>
  
Kees Cook Oct. 27, 2022, 6:58 p.m. UTC | #4
On Thu, Oct 27, 2022 at 11:26:50AM -0700, Alexander Potapenko wrote:
> On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > > A bigger issue from the NMI perspective is probably
> > > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > > function, because that hook may acquire the stackdepot lock.
> >
> > *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> > functions that are non-noinstr.
> 
> __msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
> currently implemented as:
> 
>   static __always_inline bool kmsan_in_runtime(void)
>   {
>           if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
>                   return true;
>           return kmsan_get_context()->kmsan_in_runtime;
>   }
> 
> I think the easiest way to fix the NMI situation would be adding "if
> in_nmi() return true"?

It might help to look through these threads:

https://lore.kernel.org/lkml/20220916135953.1320601-1-keescook@chromium.org/
https://lore.kernel.org/all/20220919201648.2250764-1-keescook@chromium.org/

I wandered around attempting to deal with in_nmi(), etc., and in
the end just dropped the attempt to cover it. It's worth noting that
copy_from_user_nmi() exists on 1 architecture and has exactly 1
call-site...
  
Peter Zijlstra Oct. 27, 2022, 7:26 p.m. UTC | #5
On Thu, Oct 27, 2022 at 11:58:35AM -0700, Kees Cook wrote:

> I wandered around attempting to deal with in_nmi(), etc., and in
> the end just dropped the attempt to cover it. It's worth noting that
> copy_from_user_nmi() exists on 1 architecture and has exactly 1
> call-site...

Yeah, back when I wrote it, it was a lot more complicated because we
could not reliably take #PF from NMI context; it did manual page-walks,
kmap_atomic()s and memcpy(). That's all fixed now and it's really mostly
a rudiment -- except for these instrumentation issues it seems.
  
Alexander Potapenko Oct. 27, 2022, 11:24 p.m. UTC | #6
On Thu, Oct 27, 2022 at 11:58 AM Kees Cook <keescook@chromium.org> wrote:
>
> On Thu, Oct 27, 2022 at 11:26:50AM -0700, Alexander Potapenko wrote:
> > On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > > > A bigger issue from the NMI perspective is probably
> > > > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > > > function, because that hook may acquire the stackdepot lock.
> > >
> > > *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> > > functions that are non-noinstr.
> >
> > __msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
> > currently implemented as:
> >
> >   static __always_inline bool kmsan_in_runtime(void)
> >   {
> >           if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
> >                   return true;
> >           return kmsan_get_context()->kmsan_in_runtime;
> >   }
> >
> > I think the easiest way to fix the NMI situation would be adding "if
> > in_nmi() return true"?
>
> It might help to look through these threads:
>
> https://lore.kernel.org/lkml/20220916135953.1320601-1-keescook@chromium.org/
> https://lore.kernel.org/all/20220919201648.2250764-1-keescook@chromium.org/

Sorry, I missed that email; I should have responded earlier.

> I wandered around attempting to deal with in_nmi(), etc., and in
> the end just dropped the attempt to cover it. It's worth noting that
> copy_from_user_nmi() exists on 1 architecture and has exactly 1
> call-site...

It doesn't really matter for KASAN, because a missing addressability
check only means missing some (possibly rare) bugs.
For KMSAN, a missing unpoisoning call will result in false positives,
and we have already started seeing them: show_opcodes() copies data
into a local buffer and prints it, but without a call to
kmsan_unpoison_memory() this results in error reports about opcodes[]
being uninitialized.

So for this particular case I want to ensure kmsan_unpoison_memory()
can be called from NMI context (by removing the kmsan_in_runtime()
check from it), and to be on the safe side we'll also have to make
__msan_poison_alloca() a no-op under in_nmi().
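I.e., roughly (a hypothetical sketch of the two changes; the elided
bodies stay as they are today):

  void kmsan_unpoison_memory(const void *address, size_t size)
  {
          if (!kmsan_enabled)     /* no kmsan_in_runtime() bailout anymore */
                  return;
          /* ... unpoison the shadow for [address, address + size) ... */
  }

  void __msan_poison_alloca(void *address, uintptr_t size, char *descr)
  {
          if (in_nmi())           /* stackdepot locking is not NMI-safe */
                  return;
          if (!kmsan_enabled || kmsan_in_runtime())
                  return;
          /* ... record the origin and poison the shadow as before ... */
  }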


> --
> Kees Cook


--
Alexander Potapenko
Software Engineer

  

Patch

diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index f1bb186171562..24b48af274173 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -6,6 +6,7 @@ 
 
 #include <linux/uaccess.h>
 #include <linux/export.h>
+#include <linux/instrumented.h>
 
 #include <asm/tlbflush.h>
 
@@ -44,7 +45,9 @@  copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	 * called from other contexts.
 	 */
 	pagefault_disable();
+	instrument_copy_from_user_before(to, from, n);
 	ret = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, ret);
 	pagefault_enable();
 
 	return ret;