From patchwork Wed Apr 26 17:17:11 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 87923
Message-ID: <20230426171750.635695746@goodmis.org>
User-Agent: quilt/0.66
Date: Wed, 26 Apr 2023 13:17:11 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Beau Belgrave
Subject: [for-next][PATCH 08/11] tracing/user_events: Limit max fault-in attempts
References: <20230426171703.202523909@goodmis.org>

From: Beau Belgrave

When event enablement changes, user_events attempts to update a bit in the
user process. If a fault is hit, an attempt is made to fault-in the page, and
the write is retried if the page made it in. While this normally takes only a
couple of attempts, a bad user process could arrange for the write to keep
faulting and cause an infinite loop. Ensure fault-in attempts, whether
synchronous or asynchronous, are limited to a maximum of 10 for each update.
When the maximum is hit, return -EFAULT so that no further attempt is made in
any case.
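For context (not part of this patch): the bit being updated lives in user
memory that the process hands to the kernel when it registers an event. Below
is a minimal userspace sketch of that registration, assuming the struct
user_reg / DIAG_IOCSREG interface from <linux/user_events.h> that this series
targets; the event name, the choice of bit 31, and the error handling are
illustrative only. If the page holding "enabled" is not resident when the
kernel tries to flip the bit, the fault-in/retry path bounded by this patch is
taken.

/* Hypothetical userspace sketch; names and layout follow the documented
 * user_events ABI, but verify against your kernel headers. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/user_events.h>

static uint32_t enabled;	/* kernel sets/clears a bit here on enable/disable */

int main(void)
{
	struct user_reg reg;
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&reg, 0, sizeof(reg));
	reg.size = sizeof(reg);
	reg.name_args = (uint64_t)(uintptr_t)"example_event u32 count";
	reg.enable_addr = (uint64_t)(uintptr_t)&enabled;	/* page the kernel must fault in */
	reg.enable_bit = 31;
	reg.enable_size = sizeof(enabled);

	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0) {
		perror("DIAG_IOCSREG");
		close(fd);
		return 1;
	}

	/* ... emit events while (enabled & (1U << reg.enable_bit)) is set ... */

	close(fd);
	return 0;
}

When tracing of the event is toggled, the kernel writes that enable bit; the
patch below caps how many times it will retry the write if the page keeps
faulting.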
Link: https://lkml.kernel.org/r/20230425225107.8525-5-beaub@linux.microsoft.com

Suggested-by: Steven Rostedt (Google)
Signed-off-by: Beau Belgrave
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace_events_user.c | 49 +++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index a29cd13caf55..b1ecd7677642 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -123,6 +123,7 @@ struct user_event_enabler_fault {
 	struct work_struct		work;
 	struct user_event_mm		*mm;
 	struct user_event_enabler	*enabler;
+	int				attempt;
 };
 
 static struct kmem_cache *fault_cache;
@@ -266,11 +267,19 @@ static void user_event_enabler_destroy(struct user_event_enabler *enabler)
 	kfree(enabler);
 }
 
-static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
+static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr,
+				  int attempt)
 {
 	bool unlocked;
 	int ret;
 
+	/*
+	 * Normally this is low, ensure that it cannot be taken advantage of by
+	 * bad user processes to cause excessive looping.
+	 */
+	if (attempt > 10)
+		return -EFAULT;
+
 	mmap_read_lock(mm->mm);
 
 	/* Ensure MM has tasks, cannot use after exit_mm() */
@@ -289,7 +298,7 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault);
+				    bool fixup_fault, int *attempt);
 
 static void user_event_enabler_fault_fixup(struct work_struct *work)
 {
@@ -298,9 +307,10 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 	struct user_event_enabler *enabler = fault->enabler;
 	struct user_event_mm *mm = fault->mm;
 	unsigned long uaddr = enabler->addr;
+	int attempt = fault->attempt;
 	int ret;
 
-	ret = user_event_mm_fault_in(mm, uaddr);
+	ret = user_event_mm_fault_in(mm, uaddr, attempt);
 
 	if (ret && ret != -ENOENT) {
 		struct user_event *user = enabler->event;
@@ -329,7 +339,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	if (!ret) {
 		mmap_read_lock(mm->mm);
-		user_event_enabler_write(mm, enabler, true);
+		user_event_enabler_write(mm, enabler, true, &attempt);
 		mmap_read_unlock(mm->mm);
 	}
 out:
@@ -341,7 +351,8 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 }
 
 static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
-					   struct user_event_enabler *enabler)
+					   struct user_event_enabler *enabler,
+					   int attempt)
 {
 	struct user_event_enabler_fault *fault;
 
@@ -353,6 +364,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 	INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
 	fault->mm = user_event_mm_get(mm);
 	fault->enabler = enabler;
+	fault->attempt = attempt;
 
 	/* Don't try to queue in again while we have a pending fault */
 	set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
@@ -372,7 +384,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault)
+				    bool fixup_fault, int *attempt)
 {
 	unsigned long uaddr = enabler->addr;
 	unsigned long *ptr;
@@ -383,6 +395,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	lockdep_assert_held(&event_mutex);
 	mmap_assert_locked(mm->mm);
 
+	*attempt += 1;
+
 	/* Ensure MM has tasks, cannot use after exit_mm() */
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
@@ -398,7 +412,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 		if (!fixup_fault)
 			return -EFAULT;
 
-		if (!user_event_enabler_queue_fault(mm, enabler))
+		if (!user_event_enabler_queue_fault(mm, enabler, *attempt))
 			pr_warn("user_events: Unable to queue fault handler\n");
 
 		return -EFAULT;
@@ -439,15 +453,19 @@ static void user_event_enabler_update(struct user_event *user)
 	struct user_event_enabler *enabler;
 	struct user_event_mm *mm = user_event_mm_get_all(user);
 	struct user_event_mm *next;
+	int attempt;
 
 	while (mm) {
 		next = mm->next;
 		mmap_read_lock(mm->mm);
 		rcu_read_lock();
 
-		list_for_each_entry_rcu(enabler, &mm->enablers, link)
-			if (enabler->event == user)
-				user_event_enabler_write(mm, enabler, true);
+		list_for_each_entry_rcu(enabler, &mm->enablers, link) {
+			if (enabler->event == user) {
+				attempt = 0;
+				user_event_enabler_write(mm, enabler, true,
+							 &attempt);
+			}
+		}
 
 		rcu_read_unlock();
 		mmap_read_unlock(mm->mm);
@@ -695,6 +713,7 @@ static struct user_event_enabler
 	struct user_event_enabler *enabler;
 	struct user_event_mm *user_mm;
 	unsigned long uaddr = (unsigned long)reg->enable_addr;
+	int attempt = 0;
 
 	user_mm = current_user_event_mm();
 
@@ -715,7 +734,8 @@ static struct user_event_enabler
 
 	/* Attempt to reflect the current state within the process */
 	mmap_read_lock(user_mm->mm);
-	*write_result = user_event_enabler_write(user_mm, enabler, false);
+	*write_result = user_event_enabler_write(user_mm, enabler, false,
+						 &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	/*
@@ -735,7 +755,7 @@ static struct user_event_enabler
 
 	if (*write_result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 
 		kfree(enabler);
@@ -2195,6 +2215,7 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 {
 	struct user_event_enabler enabler;
 	int result;
+	int attempt = 0;
 
 	memset(&enabler, 0, sizeof(enabler));
 	enabler.addr = uaddr;
@@ -2205,14 +2226,14 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 
 	/* Force the bit to be cleared, since no event is attached */
 	mmap_read_lock(user_mm->mm);
-	result = user_event_enabler_write(user_mm, &enabler, false);
+	result = user_event_enabler_write(user_mm, &enabler, false, &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	mutex_unlock(&event_mutex);
 
 	if (result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 	}