[v3,2/6] exit: Put an upper limit on how often we can oops

Message ID 20221117234328.594699-2-keescook@chromium.org
State New
Series: exit: Put an upper limit on how often we can oops

Commit Message

Kees Cook Nov. 17, 2022, 11:43 p.m. UTC
  From: Jann Horn <jannh@google.com>

Many Linux systems are configured to not panic on oops, but allowing an
attacker to oops the system **really** often can make even bugs that look
completely unexploitable (like NULL dereferences and such) exploitable: if
each crash leaks a refcount increment or a read-mode lock acquisition, the
underlying counter eventually overflows.
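
To make the counter wrap concrete, here is a minimal userspace C sketch
(not kernel code, and not part of this patch; everything in it is purely
illustrative) of the arithmetic described above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Illustration only, not kernel code: one legitimate reference. */
	uint32_t refcount = 1;

	/* Each oops leaks one increment; 2^32 of them wrap the counter. */
	for (uint64_t oopses = 0; oopses < (1ULL << 32); oopses++)
		refcount++;

	/* Wrapped all the way around: back to 1, as if nothing leaked. */
	printf("refcount after 2^32 leaked increments: %u\n", refcount);

	/*
	 * Dropping the one legitimate reference now "frees" the object
	 * even though the leaked references are still in use elsewhere:
	 * a use-after-free.
	 */
	if (--refcount == 0)
		printf("object would be freed while still referenced\n");

	return 0;
}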

The most interesting counters for this are 32 bits wide (like open-coded
refcounts that don't use refcount_t). (The ldsem reader count on 32-bit
platforms is just 16 bits, but probably nobody cares about 32-bit platforms
that much nowadays.)
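
(Back-of-envelope: a 16-bit count wraps after only 2^16 = 65536 oopses;
even at the slow ~45 ms/oops text-console rate measured below, that is
roughly 65536 * 0.045 s ~= 49 minutes.)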

So let's panic the system if the kernel is constantly oopsing.

The speed of oopsing 2^32 times probably depends on several factors, like
how long the stack trace is and which unwinder you're using; an empirically
important one is whether your console is showing a graphical environment or
a text console to which oopses are printed.
In a quick single-threaded benchmark, oopsing in a vfork() child with a
very short stack trace takes only ~510 microseconds per run when a
graphical console is active; but switching to a text console to which
oopses are printed slows it down around 87x, to ~45 milliseconds per run.
(Adding more threads makes this faster, but the actual oops printing
happens under &die_lock on x86, so you can maybe speed this up by a factor
of around 2 before any further improvement gets eaten up by lock
contention.)

It looks like it would take around 8-12 days to overflow a 32-bit counter
with repeated oopsing on a multi-core x86 system running a graphical
environment; both Seth (with a distro kernel on normal hardware in a
standard configuration) and I (in an x86 VM) got numbers in that ballpark.
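
(For reference, the rough arithmetic behind that estimate, using the ~510
microseconds/oops figure from the benchmark above and the ~2x ceiling on
parallel speedup imposed by &die_lock:

    2^32 oopses * ~510 us/oops ~= 2.19e6 s  ~= 25 days, single-threaded
    ~25 days / ~2x parallelism              ~= 12-13 days

which lands in the 8-12 day ballpark.)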

12 days isn't *that* short on a desktop system, and it would likely take
much longer on a typical server (assuming that people don't run graphical
desktop environments on their servers). This is also a *very* noisy and
violent approach to exploiting the kernel, and it seems to take orders of
magnitude longer on some machines, probably because things like EFI
pstore slow it down a ton when active.

Signed-off-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/r/20221107201317.324457-1-jannh@google.com
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 Documentation/admin-guide/sysctl/kernel.rst |  8 ++++
 kernel/exit.c                               | 42 +++++++++++++++++++++
 2 files changed, 50 insertions(+)
  

Comments

SeongJae Park Jan. 19, 2023, 8:10 p.m. UTC | #1
Hello,

On Thu, 17 Nov 2022 15:43:22 -0800 Kees Cook <keescook@chromium.org> wrote:

> From: Jann Horn <jannh@google.com>
> [...]

I found a blog article[1] recommending that LTS kernels backport this:

    While this patch is already upstream, it is important that distributed
    kernels also inherit this oops limit and backport it to LTS releases if we
    want to avoid treating such null-dereference bugs as full-fledged security
    issues in the future.

Do you have a plan to backport this into upstream LTS kernels?

[1] https://googleprojectzero.blogspot.com/2023/01/exploiting-null-dereferences-in-linux.html


Thanks,
SJ

  
Seth Jenkins Jan. 19, 2023, 8:19 p.m. UTC | #2
> Do you have a plan to backport this into upstream LTS kernels?

As I understand it, the answer is "hopefully yes", with the big
presumption that all stakeholders are on board for the change. There
is *definitely* a plan to *submit* backports to the stable trees, but
of course it will require some approvals.


  
Kees Cook Jan. 20, 2023, 12:28 a.m. UTC | #3
On Thu, Jan 19, 2023 at 03:19:21PM -0500, Seth Jenkins wrote:
> > Do you have a plan to backport this into upstream LTS kernels?
> 
> As I understand, the answer is "hopefully yes" with the big
> presumption that all stakeholders are on board for the change. There
> is *definitely* a plan to *submit* backports to the stable trees, but
> ofc it will require some approvals.

I've asked for at least v6.1.x (it's a clean cherry-pick). Earlier
kernels will need some non-trivial backporting. Is there anyone that
would be interested in stepping up to do that?

https://lore.kernel.org/lkml/202301191532.AEEC765@keescook

-Kees
  
Eric Biggers Jan. 24, 2023, 6:54 p.m. UTC | #4
On Thu, Jan 19, 2023 at 04:28:42PM -0800, Kees Cook wrote:
> [...]
> 
> I've asked for at least v6.1.x (it's a clean cherry-pick). Earlier
> kernels will need some non-trivial backporting. Is there anyone that
> would be interested in stepping up to do that?
> 
> https://lore.kernel.org/lkml/202301191532.AEEC765@keescook
> 

I've sent out a backport to 5.15:
https://lore.kernel.org/stable/20230124185110.143857-1-ebiggers@kernel.org/T/#t

- Eric
  
Eric Biggers Jan. 24, 2023, 7:38 p.m. UTC | #5
On Tue, Jan 24, 2023 at 10:54:57AM -0800, Eric Biggers wrote:
> [...]
> 
> I've sent out a backport to 5.15:
> https://lore.kernel.org/stable/20230124185110.143857-1-ebiggers@kernel.org/T/#t

Also 5.10, which wasn't too hard after doing 5.15:
https://lore.kernel.org/stable/20230124193004.206841-1-ebiggers@kernel.org/T/#t

- Eric
  
Kees Cook Jan. 24, 2023, 11:09 p.m. UTC | #6
On January 24, 2023 11:38:05 AM PST, Eric Biggers <ebiggers@kernel.org> wrote:
>[...]
>
>Also 5.10, which wasn't too hard after doing 5.15:
>https://lore.kernel.org/stable/20230124193004.206841-1-ebiggers@kernel.org/T/#t

Oh excellent! Thank you very much!

-Kees
  

Patch

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 98d1b198b2b4..09f3fb2f8585 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -667,6 +667,14 @@  This is the default behavior.
 an oops event is detected.
 
 
+oops_limit
+==========
+
+Number of kernel oopses after which the kernel should panic when
+``panic_on_oops`` is not set. Setting this to 0 or 1 has the same effect
+as setting ``panic_on_oops=1``.
+
+
 osrelease, ostype & version
 ===========================
 
diff --git a/kernel/exit.c b/kernel/exit.c
index 35e0a31a0315..799c5edd6be6 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -72,6 +72,33 @@ 
 #include <asm/unistd.h>
 #include <asm/mmu_context.h>
 
+/*
+ * The default value should be high enough to not crash a system that randomly
+ * crashes its kernel from time to time, but low enough to at least not permit
+ * overflowing 32-bit refcounts or the ldsem writer count.
+ */
+static unsigned int oops_limit = 10000;
+
+#ifdef CONFIG_SYSCTL
+static struct ctl_table kern_exit_table[] = {
+	{
+		.procname       = "oops_limit",
+		.data           = &oops_limit,
+		.maxlen         = sizeof(oops_limit),
+		.mode           = 0644,
+		.proc_handler   = proc_douintvec,
+	},
+	{ }
+};
+
+static __init int kernel_exit_sysctls_init(void)
+{
+	register_sysctl_init("kernel", kern_exit_table);
+	return 0;
+}
+late_initcall(kernel_exit_sysctls_init);
+#endif
+
 static void __unhash_process(struct task_struct *p, bool group_dead)
 {
 	nr_threads--;
@@ -874,6 +901,8 @@  void __noreturn do_exit(long code)
 
 void __noreturn make_task_dead(int signr)
 {
+	static atomic_t oops_count = ATOMIC_INIT(0);
+
 	/*
 	 * Take the task off the cpu after something catastrophic has
 	 * happened.
@@ -897,6 +926,19 @@  void __noreturn make_task_dead(int signr)
 		preempt_count_set(PREEMPT_ENABLED);
 	}
 
+	/*
+	 * Every time the system oopses, if the oops happens while a reference
+	 * to an object was held, the reference leaks.
+	 * If the oops doesn't also leak memory, repeated oopsing can cause
+	 * reference counters to wrap around (if they're not using refcount_t).
+	 * This means that repeated oopsing can make unexploitable-looking
+	 * bugs exploitable.
+	 * To make sure this can't happen, place an upper bound on how often the
+	 * kernel may oops without panic().
+	 */
+	if (atomic_inc_return(&oops_count) >= READ_ONCE(oops_limit))
+		panic("Oopsed too often (kernel.oops_limit is %d)", oops_limit);
+
 	/*
 	 * We're taking recursive faults here in make_task_dead. Safest is to just
 	 * leave this task alone and wait for reboot.