From patchwork Wed Mar 29 19:45:31 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76799
Message-ID: <20230329194552.069419879@goodmis.org>
User-Agent: quilt/0.66
Date: Wed, 29 Mar 2023 15:45:31 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Beau Belgrave
Subject: [for-next][PATCH 15/25] tracing/user_events: Fixup enable faults asyncly
References: <20230329194516.146147554@goodmis.org>

From: Beau Belgrave

When events are enabled within the various tracing facilities, such as
ftrace/perf, the event_mutex is held. As events are enabled, pages are
accessed. We do not want page faults to occur under this lock. Instead,
queue the fault to a workqueue so it can be handled in process context,
safely and without the lock.

The enable address is marked faulting while the async fault-in occurs.
This ensures that we don't attempt to fault-in more than is necessary.
Once the page has been faulted in, the address write is re-attempted.
If the page couldn't be faulted in, we wait until the next time the
event is enabled, which prevents any potential infinite loop.
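To see the deferred-retry pattern in isolation, here is a minimal, self-contained
userspace C sketch of the same idea: the write attempt made under the lock refuses
to do slow fault-in work, marks the enabler as faulting so the work is not queued
twice, and an asynchronous worker performs the fault-in and re-issues the write
once. This is only an analogy of the approach, not the kernel code in the patch
below; pthreads stand in for the kernel workqueue, an atomic flag stands in for
ENABLE_VAL_FAULTING_BIT, and every name in it is illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define FAULTING_BIT (1u << 6)	/* plays the role of ENABLE_VAL_FAULTING_BIT */

struct enabler {
	atomic_uint flags;	/* bit 0: enable value, bit 6: faulting */
	bool page_ready;	/* stands in for the user page being resident */
};

static pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Attempt the enable write; must not do slow "fault-in" work here. */
static int enabler_write(struct enabler *e)
{
	/* A fault fix-up is already pending: don't queue another one. */
	if (atomic_load(&e->flags) & FAULTING_BIT)
		return -1;

	if (!e->page_ready) {
		/* Mark faulting so concurrent writers back off. */
		atomic_fetch_or(&e->flags, FAULTING_BIT);
		return -1;	/* caller schedules fault_fixup() */
	}

	atomic_fetch_or(&e->flags, 1u);	/* the actual bit update */
	return 0;
}

/* Async worker: "fault the page in", then retry the write under the lock. */
static void *fault_fixup(void *arg)
{
	struct enabler *e = arg;

	sleep(1);		/* stands in for faulting the page in */
	e->page_ready = true;

	pthread_mutex_lock(&event_mutex);
	atomic_fetch_and(&e->flags, ~FAULTING_BIT);
	enabler_write(e);	/* re-attempt now that the page is there */
	pthread_mutex_unlock(&event_mutex);
	return NULL;
}

int main(void)
{
	struct enabler e = { 0 };
	pthread_t worker;

	pthread_mutex_lock(&event_mutex);
	if (enabler_write(&e))	/* first try fails; defer the fix-up */
		pthread_create(&worker, NULL, fault_fixup, &e);
	pthread_mutex_unlock(&event_mutex);

	pthread_join(worker, NULL);
	printf("flags after retry: 0x%x\n", atomic_load(&e.flags));
	return 0;
}

Built with cc -pthread, the sketch prints flags 0x1 once the worker's retry has
performed the enable write with the faulting flag cleared.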
Link: https://lkml.kernel.org/r/20230328235219.203-5-beaub@linux.microsoft.com

Signed-off-by: Beau Belgrave
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace_events_user.c | 120 +++++++++++++++++++++++++++++--
 1 file changed, 114 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 553a82ee7aeb..86bda1660536 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -99,9 +99,23 @@ struct user_event_enabler {
 /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */
 #define ENABLE_VAL_BIT_MASK 0x3F
 
+/* Bit 6 is for faulting status of enablement */
+#define ENABLE_VAL_FAULTING_BIT 6
+
 /* Only duplicate the bit value */
 #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK
 
+#define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values)
+
+/* Used for asynchronous faulting in of pages */
+struct user_event_enabler_fault {
+	struct work_struct		work;
+	struct user_event_mm		*mm;
+	struct user_event_enabler	*enabler;
+};
+
+static struct kmem_cache *fault_cache;
+
 /* Global list of memory descriptors using user_events */
 static LIST_HEAD(user_event_mms);
 static DEFINE_SPINLOCK(user_event_mms_lock);
@@ -263,7 +277,85 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
 }
 
 static int user_event_enabler_write(struct user_event_mm *mm,
-				    struct user_event_enabler *enabler)
+				    struct user_event_enabler *enabler,
+				    bool fixup_fault);
+
+static void user_event_enabler_fault_fixup(struct work_struct *work)
+{
+	struct user_event_enabler_fault *fault = container_of(
+		work, struct user_event_enabler_fault, work);
+	struct user_event_enabler *enabler = fault->enabler;
+	struct user_event_mm *mm = fault->mm;
+	unsigned long uaddr = enabler->addr;
+	int ret;
+
+	ret = user_event_mm_fault_in(mm, uaddr);
+
+	if (ret && ret != -ENOENT) {
+		struct user_event *user = enabler->event;
+
+		pr_warn("user_events: Fault for mm: 0x%pK @ 0x%llx event: %s\n",
+			mm->mm, (unsigned long long)uaddr, EVENT_NAME(user));
+	}
+
+	/* Prevent state changes from racing */
+	mutex_lock(&event_mutex);
+
+	/*
+	 * If we managed to get the page, re-issue the write. We do not
+	 * want to get into a possible infinite loop, which is why we only
+	 * attempt again directly if the page came in. If we couldn't get
+	 * the page here, then we will try again the next time the event is
+	 * enabled/disabled.
+	 */
+	clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+	if (!ret) {
+		mmap_read_lock(mm->mm);
+		user_event_enabler_write(mm, enabler, true);
+		mmap_read_unlock(mm->mm);
+	}
+
+	mutex_unlock(&event_mutex);
+
+	/* In all cases we no longer need the mm or fault */
+	user_event_mm_put(mm);
+	kmem_cache_free(fault_cache, fault);
+}
+
+static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
+					   struct user_event_enabler *enabler)
+{
+	struct user_event_enabler_fault *fault;
+
+	fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN);
+
+	if (!fault)
+		return false;
+
+	INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
+	fault->mm = user_event_mm_get(mm);
+	fault->enabler = enabler;
+
+	/* Don't try to queue in again while we have a pending fault */
+	set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+	if (!schedule_work(&fault->work)) {
+		/* Allow another attempt later */
+		clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+		user_event_mm_put(mm);
+		kmem_cache_free(fault_cache, fault);
+
+		return false;
+	}
+
+	return true;
+}
+
+static int user_event_enabler_write(struct user_event_mm *mm,
+				    struct user_event_enabler *enabler,
+				    bool fixup_fault)
 {
 	unsigned long uaddr = enabler->addr;
 	unsigned long *ptr;
@@ -278,11 +370,19 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
 
+	if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))))
+		return -EBUSY;
+
 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
 				    &page, NULL, NULL);
 
-	if (ret <= 0) {
-		pr_warn("user_events: Enable write failed\n");
+	if (unlikely(ret <= 0)) {
+		if (!fixup_fault)
+			return -EFAULT;
+
+		if (!user_event_enabler_queue_fault(mm, enabler))
+			pr_warn("user_events: Unable to queue fault handler\n");
+
 		return -EFAULT;
 	}
 
@@ -314,7 +414,7 @@ static void user_event_enabler_update(struct user_event *user)
 
 		list_for_each_entry_rcu(enabler, &mm->enablers, link)
 			if (enabler->event == user)
-				user_event_enabler_write(mm, enabler);
+				user_event_enabler_write(mm, enabler, true);
 
 		rcu_read_unlock();
 		mmap_read_unlock(mm->mm);
@@ -562,7 +662,7 @@ static struct user_event_enabler
 
 	/* Attempt to reflect the current state within the process */
 	mmap_read_lock(user_mm->mm);
-	*write_result = user_event_enabler_write(user_mm, enabler);
+	*write_result = user_event_enabler_write(user_mm, enabler, false);
 	mmap_read_unlock(user_mm->mm);
 
 	/*
@@ -2201,16 +2301,24 @@ static int __init trace_events_user_init(void)
 {
 	int ret;
 
+	fault_cache = KMEM_CACHE(user_event_enabler_fault, 0);
+
+	if (!fault_cache)
+		return -ENOMEM;
+
 	init_group = user_event_group_create(&init_user_ns);
 
-	if (!init_group)
+	if (!init_group) {
+		kmem_cache_destroy(fault_cache);
 		return -ENOMEM;
+	}
 
 	ret = create_user_tracefs();
 
 	if (ret) {
 		pr_warn("user_events could not register with tracefs\n");
 		user_event_group_destroy(init_group);
+		kmem_cache_destroy(fault_cache);
 		init_group = NULL;
 		return ret;
 	}