[2/3] tracing/user_events: Ensure bit is cleared on unregister
Commit Message
If an event is enabled and a user process unregisters user_events, the
bit is left set. Fix this by always clearing the bit in the user process
if unregister is successful.
Update abi self-test to ensure this occurs properly.
Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
kernel/trace/trace_events_user.c | 34 +++++++++++++++++++
.../testing/selftests/user_events/abi_test.c | 9 +++--
2 files changed, 40 insertions(+), 3 deletions(-)
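For context, the behavior described in the commit message can be seen from user space with a sketch along the following lines. This is illustrative only: the event name, the tracefs path, and the step where a tracer enables the event are assumptions for the example; the struct and ioctl definitions come from the uapi header <linux/user_events.h> and are not restated here.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/user_events.h>

/* Bit 0 is set by the kernel while the event is enabled */
static unsigned int enabled;

int main(void)
{
	struct user_reg reg = {0};
	struct user_unreg unreg = {0};
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

	if (fd < 0)
		return 1;

	reg.size = sizeof(reg);
	reg.enable_bit = 0;
	reg.enable_size = sizeof(enabled);
	reg.enable_addr = (__u64)&enabled;
	reg.name_args = (__u64)"example_unreg u32 count";

	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)
		return 1;

	/* ... a tracer enables the event here, so bit 0 of 'enabled' gets set ... */

	unreg.size = sizeof(unreg);
	unreg.disable_bit = 0;
	unreg.disable_addr = (__u64)&enabled;

	if (ioctl(fd, DIAG_IOCSUNREG, &unreg) < 0)
		return 1;

	/*
	 * Before this patch, bit 0 could still be set here even though the
	 * address/bit is no longer attached to any event; with the patch the
	 * kernel clears it on successful unregister.
	 */
	printf("enabled after unregister: %u\n", enabled & 1);

	close(fd);
	return 0;
}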
Comments
On Tue, 11 Apr 2023 14:17:08 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:
> +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> + unsigned long uaddr, unsigned char bit)
> +{
> + struct user_event_enabler enabler;
> + int result;
> +
> + memset(&enabler, 0, sizeof(enabler));
> + enabler.addr = uaddr;
> + enabler.values = bit;
> +retry:
> + /* Prevents state changes from racing with new enablers */
> + mutex_lock(&event_mutex);
> +
> + /* Force the bit to be cleared, since no event is attached */
> + mmap_read_lock(user_mm->mm);
> + result = user_event_enabler_write(user_mm, &enabler, false);
> + mmap_read_unlock(user_mm->mm);
> +
> + mutex_unlock(&event_mutex);
> +
> + if (result) {
> + /* Attempt to fault-in and retry if it worked */
> + if (!user_event_mm_fault_in(user_mm, uaddr))
> + goto retry;
Without looking into the functions of this call, I wonder if this can
get into an infinite loop?
-- Steve
> + }
> +
> + return result;
> +}
> +
On Mon, Apr 24, 2023 at 09:39:57PM -0400, Steven Rostedt wrote:
> On Tue, 11 Apr 2023 14:17:08 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
>
> > +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> > + unsigned long uaddr, unsigned char bit)
> > +{
> > + struct user_event_enabler enabler;
> > + int result;
> > +
> > + memset(&enabler, 0, sizeof(enabler));
> > + enabler.addr = uaddr;
> > + enabler.values = bit;
> > +retry:
> > + /* Prevents state changes from racing with new enablers */
> > + mutex_lock(&event_mutex);
> > +
> > + /* Force the bit to be cleared, since no event is attached */
> > + mmap_read_lock(user_mm->mm);
> > + result = user_event_enabler_write(user_mm, &enabler, false);
> > + mmap_read_unlock(user_mm->mm);
> > +
> > + mutex_unlock(&event_mutex);
> > +
> > + if (result) {
> > + /* Attempt to fault-in and retry if it worked */
> > + if (!user_event_mm_fault_in(user_mm, uaddr))
> > + goto retry;
>
> Without looking into the functions of this call, I wonder if this can
> get into an infinite loop?
>
That's a good point, user_event_mm_fault_in() is a wrapper around
fixup_user_fault(). We retry if the fault-in works, so I guess if the user
could somehow make the write fail while the fault-in keeps succeeding, the
loop could keep going for that period. I cannot think of a way to achieve
this forever, but that doesn't mean it couldn't happen.
I can certainly add an upper bound of retries (maybe 3 or so?) if you
think it would be possible for this to occur. I think we need retries of
some amount to handle spurious faults.
Thanks,
-Beau
> -- Steve
>
>
> > + }
> > +
> > + return result;
> > +}
> > +
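For context on the exchange above: user_event_mm_fault_in() is described as a thin wrapper around fixup_user_fault(). A paraphrased sketch of such a helper, under that assumption and not the exact kernel implementation, might look like this.

static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
{
	bool unlocked;
	int ret;

	mmap_read_lock(mm->mm);

	/* Fault in the page holding the enable bit so the write can be retried */
	ret = fixup_user_fault(mm->mm, uaddr, FAULT_FLAG_WRITE, &unlocked);

	mmap_read_unlock(mm->mm);

	return ret;
}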
On Tue, 25 Apr 2023 10:06:54 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:
> That's a good point, user_event_mm_fault_in() is a wrapper around
> fixup_user_fault(). We retry if the fault-in works, so I guess if the user
> could somehow make the write fail while the fault-in keeps succeeding, the
> loop could keep going for that period. I cannot think of a way to achieve
> this forever, but that doesn't mean it couldn't happen.
>
> I can certainly add an upper bound of retries (maybe 3 or so?) if you
> think it would be possible for this to occur. I think we need retries of
> some amount to handle spurious faults.
Even 10 is fine. With a comment saying, "This really shouldn't loop more
than a couple of times, but we want to make sure some mischievous user
doesn't take advantage of this looping".
-- Steve
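For illustration, a capped variant along the lines discussed above could take roughly the following shape. The MAX_FAULTS name and the limit of 10 are placeholders drawn from this discussion, not from the posted patch; the hunks below show the function as originally submitted, without the cap.

#define MAX_FAULTS 10

static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
				   unsigned long uaddr, unsigned char bit)
{
	struct user_event_enabler enabler;
	int result;
	int attempt = 0;

	memset(&enabler, 0, sizeof(enabler));
	enabler.addr = uaddr;
	enabler.values = bit;
retry:
	/* Prevents state changes from racing with new enablers */
	mutex_lock(&event_mutex);

	/* Force the bit to be cleared, since no event is attached */
	mmap_read_lock(user_mm->mm);
	result = user_event_enabler_write(user_mm, &enabler, false);
	mmap_read_unlock(user_mm->mm);

	mutex_unlock(&event_mutex);

	if (result) {
		/*
		 * This really shouldn't loop more than a couple of times,
		 * but cap the retries so a mischievous user cannot keep
		 * the write faulting and the fault-in succeeding forever.
		 */
		if (attempt++ < MAX_FAULTS &&
		    !user_event_mm_fault_in(user_mm, uaddr))
			goto retry;
	}

	return result;
}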
@@ -2146,6 +2146,35 @@ static long user_unreg_get(struct user_unreg __user *ureg,
 	return ret;
 }
 
+static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+				   unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler enabler;
+	int result;
+
+	memset(&enabler, 0, sizeof(enabler));
+	enabler.addr = uaddr;
+	enabler.values = bit;
+retry:
+	/* Prevents state changes from racing with new enablers */
+	mutex_lock(&event_mutex);
+
+	/* Force the bit to be cleared, since no event is attached */
+	mmap_read_lock(user_mm->mm);
+	result = user_event_enabler_write(user_mm, &enabler, false);
+	mmap_read_unlock(user_mm->mm);
+
+	mutex_unlock(&event_mutex);
+
+	if (result) {
+		/* Attempt to fault-in and retry if it worked */
+		if (!user_event_mm_fault_in(user_mm, uaddr))
+			goto retry;
+	}
+
+	return result;
+}
+
 /*
  * Unregisters an enablement address/bit within a task/user mm.
  */
@@ -2190,6 +2219,11 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 	mutex_unlock(&event_mutex);
 
+	/* Ensure bit is now cleared for user, regardless of event status */
+	if (!ret)
+		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+					      reg.disable_bit);
+
 	return ret;
 }
@@ -109,13 +109,16 @@ TEST_F(user, enablement) {
 	ASSERT_EQ(0, change_event(false));
 	ASSERT_EQ(0, self->check);
 
-	/* Should not change after disable */
+	/* Ensure kernel clears bit after disable */
 	ASSERT_EQ(0, change_event(true));
 	ASSERT_EQ(1, self->check);
 	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, self->check);
+
+	/* Ensure doesn't change after unreg */
+	ASSERT_EQ(0, change_event(true));
+	ASSERT_EQ(0, self->check);
 	ASSERT_EQ(0, change_event(false));
-	ASSERT_EQ(1, self->check);
-	self->check = 0;
 }
TEST_F(user, bit_sizes) {
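For readers without the full selftest at hand, the reg_disable() helper used above essentially issues the unregister ioctl for the given address/bit. The following is a paraphrased sketch of that helper, assuming data_file points at the tracefs user_events_data file and using the uapi struct user_unreg layout; see abi_test.c for the exact code.

static int reg_disable(long *enable, int bit)
{
	struct user_unreg reg = {0};
	int fd = open(data_file, O_RDWR);
	int ret;

	if (fd < 0)
		return -1;

	/* Ask the kernel to detach this address/bit from the event */
	reg.size = sizeof(reg);
	reg.disable_bit = bit;
	reg.disable_addr = (__u64)enable;

	ret = ioctl(fd, DIAG_IOCSUNREG, &reg);

	close(fd);

	return ret;
}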