From patchwork Mon Jun 5 23:38:56 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 103516
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH v2 1/5] tracing/user_events: Store register flags on events
Date: Mon, 5 Jun 2023 16:38:56 -0700
Message-Id: <20230605233900.2838-2-beaub@linux.microsoft.com>
In-Reply-To: <20230605233900.2838-1-beaub@linux.microsoft.com>
References: <20230605233900.2838-1-beaub@linux.microsoft.com>

Currently we don't have any available flags for user processes to use to
indicate options for user_events. We will soon have a flag to indicate
whether an event should auto-delete once it is no longer being used by
anyone.

Add a reg_flags field to user_events and parameters to existing functions
to allow for this in future patches.
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b1ecd7677642..34aa0a5d8e2a 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -87,6 +87,7 @@ struct user_event {
 	struct list_head	validators;
 	refcount_t		refcnt;
 	int			min_size;
+	int			reg_flags;
 	char			status;
 };
 
@@ -163,7 +164,7 @@ typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
 
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser);
+			    struct user_event **newuser, int reg_flags);
 
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
@@ -809,7 +810,8 @@ static struct list_head *user_event_get_fields(struct trace_event_call *call)
  * Upon success user_event has its ref count increased by 1.
  */
 static int user_event_parse_cmd(struct user_event_group *group,
-				char *raw_command, struct user_event **newuser)
+				char *raw_command, struct user_event **newuser,
+				int reg_flags)
 {
 	char *name = raw_command;
 	char *args = strpbrk(name, " ");
@@ -823,7 +825,7 @@ static int user_event_parse_cmd(struct user_event_group *group,
 	if (flags)
 		*flags++ = '\0';
 
-	return user_event_parse(group, name, args, flags, newuser);
+	return user_event_parse(group, name, args, flags, newuser, reg_flags);
 }
 
 static int user_field_array_size(const char *type)
@@ -1587,7 +1589,7 @@ static int user_event_create(const char *raw_command)
 
 	mutex_lock(&group->reg_mutex);
 
-	ret = user_event_parse_cmd(group, name, &user);
+	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
 		refcount_dec(&user->refcnt);
@@ -1748,7 +1750,7 @@ static int user_event_trace_register(struct user_event *user)
  */
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser)
+			    struct user_event **newuser, int reg_flags)
 {
 	int ret;
 	u32 key;
@@ -1819,6 +1821,8 @@ static int user_event_parse(struct user_event_group *group, char *name,
 	if (ret)
 		goto put_user_lock;
 
+	user->reg_flags = reg_flags;
+
 	/* Ensure we track self ref and caller ref (2) */
 	refcount_set(&user->refcnt, 2);
 
@@ -2117,7 +2121,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 		return ret;
 	}
 
-	ret = user_event_parse_cmd(info->group, name, &user);
+	ret = user_event_parse_cmd(info->group, name, &user, reg.flags);
 
 	if (ret) {
 		kfree(name);

From patchwork Mon Jun 5 23:38:57 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 103517
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH v2 2/5] tracing/user_events: Track refcount consistently via put/get
Date: Mon, 5 Jun 2023 16:38:57 -0700
Message-Id: <20230605233900.2838-3-beaub@linux.microsoft.com>
In-Reply-To: <20230605233900.2838-1-beaub@linux.microsoft.com>
References: <20230605233900.2838-1-beaub@linux.microsoft.com>

Various parts of the code today track user_event's refcnt field directly
via refcount_add/dec calls. This makes it hard to modify the behavior of
the last reference decrement consistently across all code paths. For
example, in the future we will auto-delete events upon the last reference
going away. This last reference could be dropped in many places, but we
want it handled consistently.

Add user_event_get() and user_event_put() for the add/dec. Update all
places that use direct refcounts to use these new functions. In each
location, pass whether event_mutex is locked or not; this lets future
patches drop events automatically in a clear way. Ensure that when a
caller states the lock is held, it really is (or is not) held.
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 66 +++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 26 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 34aa0a5d8e2a..8f0fb6cb0f33 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -175,6 +175,28 @@ static u32 user_event_key(char *name)
 	return jhash(name, strlen(name), 0);
 }
 
+static struct user_event *user_event_get(struct user_event *user)
+{
+	refcount_inc(&user->refcnt);
+
+	return user;
+}
+
+static void user_event_put(struct user_event *user, bool locked)
+{
+#ifdef CONFIG_LOCKDEP
+	if (locked)
+		lockdep_assert_held(&event_mutex);
+	else
+		lockdep_assert_not_held(&event_mutex);
+#endif
+
+	if (unlikely(!user))
+		return;
+
+	refcount_dec(&user->refcnt);
+}
+
 static void user_event_group_destroy(struct user_event_group *group)
 {
 	kfree(group->system_name);
@@ -258,12 +280,13 @@ static struct user_event_group
 	return NULL;
 };
 
-static void user_event_enabler_destroy(struct user_event_enabler *enabler)
+static void user_event_enabler_destroy(struct user_event_enabler *enabler,
+				       bool locked)
 {
 	list_del_rcu(&enabler->link);
 
 	/* No longer tracking the event via the enabler */
-	refcount_dec(&enabler->event->refcnt);
+	user_event_put(enabler->event, locked);
 
 	kfree(enabler);
 }
@@ -325,7 +348,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	/* User asked for enabler to be removed during fault */
 	if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) {
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, true);
 		goto out;
 	}
 
@@ -489,13 +512,12 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig,
 	if (!enabler)
 		return false;
 
-	enabler->event = orig->event;
+	enabler->event = user_event_get(orig->event);
 	enabler->addr = orig->addr;
 
 	/* Only dup part of value (ignore future flags, etc) */
 	enabler->values = orig->values & ENABLE_VAL_DUP_MASK;
 
-	refcount_inc(&enabler->event->refcnt);
 	list_add_rcu(&enabler->link, &mm->enablers);
 
 	return true;
@@ -595,7 +617,7 @@ static void user_event_mm_destroy(struct user_event_mm *mm)
 	struct user_event_enabler *enabler, *next;
 
 	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, false);
 
 	mmdrop(mm->mm);
 	kfree(mm);
@@ -748,7 +770,7 @@ static struct user_event_enabler
 	 * exit or run exec(), which includes forks and clones.
 	 */
 	if (!*write_result) {
-		refcount_inc(&enabler->event->refcnt);
+		user_event_get(user);
 		list_add_rcu(&enabler->link, &user_mm->enablers);
 	}
 
@@ -1336,10 +1358,8 @@ static struct user_event *find_user_event(struct user_event_group *group,
 	*outkey = key;
 
 	hash_for_each_possible(group->register_table, user, node, key)
-		if (!strcmp(EVENT_NAME(user), name)) {
-			refcount_inc(&user->refcnt);
-			return user;
-		}
+		if (!strcmp(EVENT_NAME(user), name))
+			return user_event_get(user);
 
 	return NULL;
 }
@@ -1553,12 +1573,12 @@ static int user_event_reg(struct trace_event_call *call,
 
 	return ret;
 inc:
-	refcount_inc(&user->refcnt);
+	user_event_get(user);
 	update_enable_bit_for(user);
 	return 0;
 dec:
 	update_enable_bit_for(user);
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 	return 0;
 }
 
@@ -1592,7 +1612,7 @@ static int user_event_create(const char *raw_command)
 	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
-		refcount_dec(&user->refcnt);
+		user_event_put(user, false);
 
 	mutex_unlock(&group->reg_mutex);
 
@@ -1856,8 +1876,13 @@ static int delete_user_event(struct user_event_group *group, char *name)
 	if (!user)
 		return -ENOENT;
 
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 
 	if (!user_event_last_ref(user))
 		return -EBUSY;
@@ -2015,9 +2035,7 @@ static int user_events_ref_add(struct user_event_file_info *info,
 	for (i = 0; i < count; ++i)
 		new_refs->events[i] = refs->events[i];
 
-	new_refs->events[i] = user;
-
-	refcount_inc(&user->refcnt);
+	new_refs->events[i] = user_event_get(user);
 
 	rcu_assign_pointer(info->refs, new_refs);
 
@@ -2131,7 +2149,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	ret = user_events_ref_add(info, user);
 
 	/* No longer need parse ref, ref_add either worked or not */
-	refcount_dec(&user->refcnt);
+	user_event_put(user, false);
 
 	/* Positive number is index and valid */
 	if (ret < 0)
@@ -2280,7 +2298,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 		set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));
 
 		if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))
-			user_event_enabler_destroy(enabler);
+			user_event_enabler_destroy(enabler, true);
 
 		/* Removed at least one */
 		ret = 0;
@@ -2337,7 +2355,6 @@ static int user_events_release(struct inode *node, struct file *file)
 	struct user_event_file_info *info = file->private_data;
 	struct user_event_group *group;
 	struct user_event_refs *refs;
-	struct user_event *user;
 	int i;
 
 	if (!info)
@@ -2361,12 +2378,9 @@ static int user_events_release(struct inode *node, struct file *file)
 	 * The underlying user_events are ref counted, and cannot be freed.
 	 * After this decrement, the user_events may be freed elsewhere.
 	 */
-	for (i = 0; i < refs->count; ++i) {
-		user = refs->events[i];
+	for (i = 0; i < refs->count; ++i)
+		user_event_put(refs->events[i], false);
-
-		if (user)
-			refcount_dec(&user->refcnt);
-	}
 out:
 	file->private_data = NULL;

From patchwork Mon Jun 5 23:38:58 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 103521
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH v2 3/5] tracing/user_events: Add auto cleanup and a flag to persist events
Date: Mon, 5 Jun 2023 16:38:58 -0700
Message-Id: <20230605233900.2838-4-beaub@linux.microsoft.com>
In-Reply-To: <20230605233900.2838-1-beaub@linux.microsoft.com>
References: <20230605233900.2838-1-beaub@linux.microsoft.com>

Currently user events must be manually deleted via the delete IOCTL call
or via the dynamic_events file. Most operators and processes want these
events to clean up automatically when nothing uses them any longer, to
prevent them from piling up without manual maintenance. However, some
operators may not want this, such as when pre-registering events via the
dynamic_events tracefs file.

Add a persist flag to the user-facing header and honor it within the
register IOCTL call. Add a max flag as well to ensure that only known
flags can be used now and in the future. Update user_event_put() to
attempt an auto delete of the event if it's the last reference.
The auto delete must run in a work queue to ensure proper behavior of
class->reg() invocations that don't expect the call to go away from
underneath them during the unregister. Add a work_struct to the
user_event struct to ensure we can do this reliably.

Link: https://lore.kernel.org/linux-trace-kernel/20230518093600.3f119d68@rorschach.local.home/
Suggested-by: Steven Rostedt
Signed-off-by: Beau Belgrave
---
 include/uapi/linux/user_events.h |  10 ++-
 kernel/trace/trace_events_user.c | 118 +++++++++++++++++++++++++++----
 2 files changed, 114 insertions(+), 14 deletions(-)

diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
index 2984aae4a2b4..74b909e520aa 100644
--- a/include/uapi/linux/user_events.h
+++ b/include/uapi/linux/user_events.h
@@ -17,6 +17,14 @@
 /* Create dynamic location entry within a 32-bit value */
 #define DYN_LOC(offset, size) ((size) << 16 | (offset))
 
+enum user_reg_flag {
+	/* Event will not delete upon last reference closing */
+	USER_EVENT_REG_PERSIST = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+};
+
 /*
  * Describes an event registration and stores the results of the registration.
  * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
@@ -33,7 +41,7 @@ struct user_reg {
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 8f0fb6cb0f33..170ec2f5076c 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -85,6 +85,7 @@ struct user_event {
 	struct hlist_node node;
 	struct list_head fields;
 	struct list_head validators;
+	struct work_struct put_work;
 	refcount_t refcnt;
 	int min_size;
 	int reg_flags;
@@ -169,6 +170,7 @@ static int user_event_parse(struct user_event_group *group, char *name,
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
 static void user_event_mm_put(struct user_event_mm *mm);
+static int destroy_user_event(struct user_event *user);
 
 static u32 user_event_key(char *name)
 {
@@ -182,19 +184,98 @@ static struct user_event *user_event_get(struct user_event *user)
 	return user;
 }
 
+static void delayed_destroy_user_event(struct work_struct *work)
+{
+	struct user_event *user = container_of(
+		work, struct user_event, put_work);
+
+	mutex_lock(&event_mutex);
+
+	if (!refcount_dec_and_test(&user->refcnt))
+		goto out;
+
+	if (destroy_user_event(user)) {
+		/*
+		 * The only reason this would fail here is if we cannot
+		 * update the visibility of the event. In this case the
+		 * event stays in the hashtable, waiting for someone to
+		 * attempt to delete it later.
+		 */
+		pr_warn("user_events: Unable to delete event\n");
+		refcount_set(&user->refcnt, 1);
+	}
+out:
+	mutex_unlock(&event_mutex);
+}
+
 static void user_event_put(struct user_event *user, bool locked)
 {
-#ifdef CONFIG_LOCKDEP
-	if (locked)
-		lockdep_assert_held(&event_mutex);
-	else
-		lockdep_assert_not_held(&event_mutex);
-#endif
+	bool delete;
 
 	if (unlikely(!user))
 		return;
 
-	refcount_dec(&user->refcnt);
+	/*
+	 * When the event is not enabled for auto-delete there will always
+	 * be at least 1 reference to the event. During the event creation
+	 * we initially set the refcnt to 2 to achieve this. In those cases
+	 * the caller must acquire event_mutex and after decrement check if
+	 * the refcnt is 1, meaning this is the last reference. When auto
+	 * delete is enabled, there will only be 1 ref, IE: refcnt will be
+	 * only set to 1 during creation to allow the below checks to go
+	 * through upon the last put. The last put must always be done with
+	 * the event mutex held.
+	 */
+	if (!locked) {
+		lockdep_assert_not_held(&event_mutex);
+		delete = refcount_dec_and_mutex_lock(&user->refcnt, &event_mutex);
+	} else {
+		lockdep_assert_held(&event_mutex);
+		delete = refcount_dec_and_test(&user->refcnt);
+	}
+
+	if (!delete)
+		return;
+
+	/* We now have the event_mutex in all cases */
+
+	if (user->reg_flags & USER_EVENT_REG_PERSIST) {
+		/* We should not get here when persist flag is set */
+		pr_alert("BUG: Auto-delete engaged on persistent event\n");
+		goto out;
+	}
+
+	/*
+	 * Unfortunately we have to attempt the actual destroy in a work
+	 * queue. This is because not all cases handle a trace_event_call
+	 * being removed within the class->reg() operation for unregister.
+	 */
+	INIT_WORK(&user->put_work, delayed_destroy_user_event);
+
+	/*
+	 * Since the event is still in the hashtable, we have to re-inc
+	 * the ref count to 1. This count will be decremented and checked
+	 * in the work queue to ensure it's still the last ref. This is
+	 * needed because a user-process could register the same event in
+	 * between the time of event_mutex release and the work queue
+	 * running the delayed destroy. If we removed the item now from
+	 * the hashtable, this would result in a timing window where a
+	 * user process would fail a register because the trace_event_call
+	 * register would fail in the tracing layers.
+	 */
+	refcount_set(&user->refcnt, 1);
+
+	if (!schedule_work(&user->put_work)) {
+		/*
+		 * If we fail we must wait for an admin to attempt delete or
+		 * another register/close of the event, whichever is first.
+		 */
+		pr_warn("user_events: Unable to queue delayed destroy\n");
+	}
+out:
+	/* Ensure if we didn't have event_mutex before we unlock it */
+	if (!locked)
+		mutex_unlock(&event_mutex);
 }
 
 static void user_event_group_destroy(struct user_event_group *group)
@@ -793,7 +874,12 @@ static struct user_event_enabler
 
 static __always_inline __must_check
 bool user_event_last_ref(struct user_event *user)
 {
-	return refcount_read(&user->refcnt) == 1;
+	int last = 0;
+
+	if (user->reg_flags & USER_EVENT_REG_PERSIST)
+		last = 1;
+
+	return refcount_read(&user->refcnt) == last;
 }
 
 static __always_inline __must_check
@@ -1609,7 +1695,8 @@ static int user_event_create(const char *raw_command)
 
 	mutex_lock(&group->reg_mutex);
 
-	ret = user_event_parse_cmd(group, name, &user, 0);
+	/* Dyn events persist, otherwise they would cleanup immediately */
+	ret = user_event_parse_cmd(group, name, &user, USER_EVENT_REG_PERSIST);
 
 	if (!ret)
 		user_event_put(user, false);
@@ -1843,8 +1930,13 @@ static int user_event_parse(struct user_event_group *group, char *name,
 
 	user->reg_flags = reg_flags;
 
-	/* Ensure we track self ref and caller ref (2) */
-	refcount_set(&user->refcnt, 2);
+	if (user->reg_flags & USER_EVENT_REG_PERSIST) {
+		/* Ensure we track self ref and caller ref (2) */
+		refcount_set(&user->refcnt, 2);
+	} else {
+		/* Ensure we track only caller ref (1) */
+		refcount_set(&user->refcnt, 1);
+	}
 
 	dyn_event_init(&user->devent, &user_event_dops);
 	dyn_event_add(&user->devent, &user->call);
@@ -2066,8 +2158,8 @@ static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg)
 	if (ret)
 		return ret;
 
-	/* Ensure no flags, since we don't support any yet */
-	if (kreg->flags != 0)
+	/* Ensure only valid flags */
+	if (kreg->flags & ~(USER_EVENT_REG_MAX-1))
 		return -EINVAL;
 
 	/* Ensure supported size */

From patchwork Mon Jun 5 23:38:59 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 103519
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH v2 4/5] tracing/user_events: Add self-test for persist flag
Date: Mon, 5 Jun 2023 16:38:59 -0700
Message-Id: <20230605233900.2838-5-beaub@linux.microsoft.com>
In-Reply-To: <20230605233900.2838-1-beaub@linux.microsoft.com>
References: <20230605233900.2838-1-beaub@linux.microsoft.com>

A new flag now exists for persisting user_events after the last reference
is closed. We must ensure this flag works correctly in the common cases.

Update the abi self-test to ensure that when this flag is used the
user_event goes away at the appropriate time. Ensure the last-fd,
last-enabler, and last-trace_event_call reference paths each correctly
delete the event for non-persist events.
Signed-off-by: Beau Belgrave
---
 .../testing/selftests/user_events/abi_test.c  | 144 +++++++++++++++++-
 .../selftests/user_events/ftrace_test.c       |   1 +
 2 files changed, 137 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index 5125c42efe65..55aaec21fd8e 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -22,10 +22,61 @@
 const char *data_file = "/sys/kernel/tracing/user_events_data";
 const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable";
+const char *temp_enable_file = "/sys/kernel/tracing/events/user_events/__abi_temp_event/enable";
 
-static int change_event(bool enable)
+static bool __exists(int grace_ms, const char *path)
 {
-	int fd = open(enable_file, O_RDWR);
+	int fd;
+
+	usleep(grace_ms * 1000);
+
+	fd = open(path, O_RDONLY);
+
+	if (fd == -1)
+		return false;
+
+	close(fd);
+
+	return true;
+}
+
+static bool temp_exists(int grace_ms)
+{
+	return __exists(grace_ms, temp_enable_file);
+}
+
+static bool exists(int grace_ms)
+{
+	return __exists(grace_ms, enable_file);
+}
+
+static int __clear(const char *name)
+{
+	int fd = open(data_file, O_RDWR);
+	int ret = 0;
+
+	if (ioctl(fd, DIAG_IOCSDEL, name) == -1)
+		if (errno != ENOENT)
+			ret = -1;
+
+	close(fd);
+
+	return ret;
+}
+
+static int clear_temp(void)
+{
+	return __clear("__abi_temp_event");
+}
+
+static int clear(void)
+{
+	return __clear("__abi_event");
+}
+
+static int __change_event(const char *path, bool enable)
+{
+	int fd = open(path, O_RDWR);
 	int ret;
 
 	if (fd < 0)
@@ -46,22 +97,48 @@ static int change_event(bool enable)
 	return ret;
 }
 
-static int reg_enable(long *enable, int size, int bit)
+static int change_temp_event(bool enable)
+{
+	return __change_event(temp_enable_file, enable);
+}
+
+static int change_event(bool enable)
+{
+	return __change_event(enable_file, enable);
+}
+
+static int __reg_enable(int *fd, const char *name, long *enable, int size,
+			int bit, int flags)
 {
 	struct user_reg reg = {0};
-	int fd = open(data_file, O_RDWR);
-	int ret;
 
-	if (fd < 0)
+	*fd = open(data_file, O_RDWR);
+
+	if (*fd < 0)
 		return -1;
 
 	reg.size = sizeof(reg);
-	reg.name_args = (__u64)"__abi_event";
+	reg.name_args = (__u64)name;
 	reg.enable_bit = bit;
 	reg.enable_addr = (__u64)enable;
 	reg.enable_size = size;
+	reg.flags = flags;
+
+	return ioctl(*fd, DIAG_IOCSREG, &reg);
+}
 
-	ret = ioctl(fd, DIAG_IOCSREG, &reg);
+static int reg_enable_temp(int *fd, long *enable, int size, int bit)
+{
+	return __reg_enable(fd, "__abi_temp_event", enable, size, bit, 0);
+}
+
+static int reg_enable(long *enable, int size, int bit)
+{
+	int ret;
+	int fd;
+
+	ret = __reg_enable(&fd, "__abi_event", enable, size, bit,
+			   USER_EVENT_REG_PERSIST);
 
 	close(fd);
 
@@ -98,6 +175,8 @@ FIXTURE_SETUP(user) {
 }
 
 FIXTURE_TEARDOWN(user) {
+	clear();
+	clear_temp();
 }
 
 TEST_F(user, enablement) {
@@ -223,6 +302,55 @@ TEST_F(user, clones) {
 	ASSERT_EQ(0, change_event(false));
 }
 
+TEST_F(user, flags) {
+	int grace = 100;
+	int fd;
+
+	/* FLAG: None */
+	/* Removal path 1, close on last fd ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	close(fd);
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 2, close on last enabler */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 3, close on last trace_event ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, change_temp_event(true));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, change_temp_event(false));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* FLAG: USER_EVENT_REG_PERSIST */
+	ASSERT_EQ(0, clear());
+	ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(true, exists(grace));
+	ASSERT_EQ(0, clear());
+	ASSERT_EQ(false, exists(grace));
+
+	/* FLAG: Non-ABI */
+	/* Unknown flags should fail with EINVAL */
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX));
+	ASSERT_EQ(EINVAL, errno);
+
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX + 1));
+	ASSERT_EQ(EINVAL, errno);
+}
+
 int main(int argc, char **argv)
 {
 	return test_harness_run(argc, argv);

diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index 7c99cef94a65..e5e966d77918 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -210,6 +210,7 @@ TEST_F(user, register_events) {
 	reg.enable_bit = 31;
 	reg.enable_addr = (__u64)&self->check;
 	reg.enable_size = sizeof(self->check);
+	reg.flags = USER_EVENT_REG_PERSIST;
 
 	unreg.size = sizeof(unreg);
 	unreg.disable_bit = 31;

From patchwork Mon Jun 5 23:39:00 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 103518
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH v2 5/5] tracing/user_events: Add persist flag documentation
Date: Mon, 5 Jun 2023 16:39:00 -0700
Message-Id: <20230605233900.2838-6-beaub@linux.microsoft.com>
In-Reply-To: <20230605233900.2838-1-beaub@linux.microsoft.com>
References: <20230605233900.2838-1-beaub@linux.microsoft.com>

There is now a flag that can be passed when registering user_events to
have the event continue to exist after the last reference is put.

Add the new flag, USER_EVENT_REG_PERSIST, to the user_events
documentation to let people know when to use it.

Signed-off-by: Beau Belgrave
---
 Documentation/trace/user_events.rst | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst
index f79987e16cf4..6736e5a32293 100644
--- a/Documentation/trace/user_events.rst
+++ b/Documentation/trace/user_events.rst
@@ -39,6 +39,14 @@ DIAG_IOCSREG.
 This command takes a packed struct user_reg as an argument::
 
+  enum user_reg_flag {
+	/* Event will not delete upon last reference closing */
+	USER_EVENT_REG_PERSIST = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+  };
+
   struct user_reg {
 	/* Input: Size of the user_reg structure being used */
 	__u32 size;
@@ -49,7 +57,7 @@ This command takes a packed struct user_reg as an argument::
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
@@ -73,10 +81,13 @@ The struct user_reg requires all the above inputs to be set appropriately.
   This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be
   used on 64-bit kernels, however, 32-bit can be used on all kernels.
 
-+ flags: The flags to use, if any. For the initial version this must be 0.
-  Callers should first attempt to use flags and retry without flags to ensure
-  support for lower versions of the kernel. If a flag is not supported -EINVAL
-  is returned.
++ flags: The flags to use, if any. Callers should first attempt to use flags
+  and retry without flags to ensure support for lower versions of the kernel.
+  If a flag is not supported -EINVAL is returned.
+
+  **USER_EVENT_REG_PERSIST**
+  When the last reference is closed for the event, the event will continue
+  to exist until a delete IOCTL is issued by a user.
 
 + enable_addr: The address of the value to use to reflect event status. This
   must be naturally aligned and write accessible within the user program.