From patchwork Tue May 30 23:53:00 2023
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 1/5] tracing/user_events: Store register flags on events
Date: Tue, 30 May 2023 16:53:00 -0700
Message-Id: <20230530235304.2726-2-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

Currently there are no flags available for user processes to indicate options for user_events. We will soon have a flag to indicate that an event should auto-delete once it is no longer being used by anyone.

Add a reg_flags field to user_events, and add parameters to the existing functions, to allow for this in future patches.
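As context for how the new parameter is threaded through, here is a minimal user-space sketch of the pattern this patch introduces: a reg_flags value supplied at registration time is passed down the parse chain and stored on the event. This is not kernel code; the struct layout and helper bodies are simplified stand-ins.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-alone analog of the patch: a reg_flags value supplied
 * at register time is threaded through the parse path and stored on the
 * event for later use (e.g. the upcoming auto-delete behavior). */
struct user_event {
	char name[64];
	int reg_flags; /* mirrors the new field added by this patch */
};

static int user_event_parse(const char *name, int reg_flags,
			    struct user_event **newuser)
{
	struct user_event *user = calloc(1, sizeof(*user));

	if (!user)
		return -1;

	strncpy(user->name, name, sizeof(user->name) - 1);
	user->reg_flags = reg_flags; /* stored for future flag checks */
	*newuser = user;

	return 0;
}

static int user_event_parse_cmd(const char *raw_command, int reg_flags,
				struct user_event **newuser)
{
	/* The real code also splits the command into name/args/flags text */
	return user_event_parse(raw_command, reg_flags, newuser);
}
```

The real functions additionally take the group, split the raw command, and run under the register mutex; only the flag plumbing is shown here.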
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b1ecd7677642..34aa0a5d8e2a 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -87,6 +87,7 @@ struct user_event {
 	struct list_head	validators;
 	refcount_t		refcnt;
 	int			min_size;
+	int			reg_flags;
 	char			status;
 };
 
@@ -163,7 +164,7 @@ typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
 
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser);
+			    struct user_event **newuser, int reg_flags);
 
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
@@ -809,7 +810,8 @@ static struct list_head *user_event_get_fields(struct trace_event_call *call)
  * Upon success user_event has its ref count increased by 1.
  */
 static int user_event_parse_cmd(struct user_event_group *group,
-				char *raw_command, struct user_event **newuser)
+				char *raw_command, struct user_event **newuser,
+				int reg_flags)
 {
 	char *name = raw_command;
 	char *args = strpbrk(name, " ");
@@ -823,7 +825,7 @@ static int user_event_parse_cmd(struct user_event_group *group,
 	if (flags)
 		*flags++ = '\0';
 
-	return user_event_parse(group, name, args, flags, newuser);
+	return user_event_parse(group, name, args, flags, newuser, reg_flags);
 }
 
 static int user_field_array_size(const char *type)
@@ -1587,7 +1589,7 @@ static int user_event_create(const char *raw_command)
 
 	mutex_lock(&group->reg_mutex);
 
-	ret = user_event_parse_cmd(group, name, &user);
+	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
 		refcount_dec(&user->refcnt);
@@ -1748,7 +1750,7 @@ static int user_event_trace_register(struct user_event *user)
  */
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser)
+			    struct user_event **newuser, int reg_flags)
 {
 	int ret;
 	u32 key;
@@ -1819,6 +1821,8 @@ static int user_event_parse(struct user_event_group *group, char *name,
 	if (ret)
 		goto put_user_lock;
 
+	user->reg_flags = reg_flags;
+
 	/* Ensure we track self ref and caller ref (2) */
 	refcount_set(&user->refcnt, 2);
 
@@ -2117,7 +2121,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 		return ret;
 	}
 
-	ret = user_event_parse_cmd(info->group, name, &user);
+	ret = user_event_parse_cmd(info->group, name, &user, reg.flags);
 
 	if (ret) {
 		kfree(name);

From patchwork Tue May 30 23:53:01 2023
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 2/5] tracing/user_events: Track refcount consistently via put/get
Date: Tue, 30 May 2023 16:53:01 -0700
Message-Id: <20230530235304.2726-3-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

Various parts of the code currently track the user_event refcnt field directly via refcount_add()/refcount_dec(). This makes it hard to modify the behavior of the last reference decrement consistently across all code paths. For example, future patches will auto-delete events upon the last reference going away. That last decrement can happen in many places, but it must be handled consistently.

Add user_event_get() and user_event_put() for the add/dec. Update all places that use direct refcounts to use these new functions, and at each call site pass whether event_mutex is held. This makes it straightforward for future patches to drop events automatically. Use lockdep to ensure that when a caller states the lock is held (or not held), it really is.
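The put/get pattern described above can be sketched in user space. This is a simplified analog, not the kernel implementation: an atomic counter stands in for refcount_t, and a plain boolean stands in for lockdep's tracking of event_mutex.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* User-space analog of user_event_get()/user_event_put(): all refcount
 * traffic funnels through one pair of helpers, so the last-reference
 * behavior can later change in a single place. event_mutex_held is a
 * stand-in for lockdep_assert_held(&event_mutex) in the kernel code. */
static bool event_mutex_held;

struct user_event {
	atomic_int refcnt;
};

static struct user_event *user_event_get(struct user_event *user)
{
	atomic_fetch_add(&user->refcnt, 1);

	return user;
}

static void user_event_put(struct user_event *user, bool locked)
{
	/* Mirror the lockdep assertions: the caller's claim about holding
	 * the lock must match reality, in both directions. */
	assert(locked == event_mutex_held);

	if (!user)
		return;

	atomic_fetch_sub(&user->refcnt, 1);
}
```

The design point is that every call site now declares its locking context; once the last decrement needs extra work (such as auto-delete), only user_event_put() has to change.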
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 66 +++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 26 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 34aa0a5d8e2a..8f0fb6cb0f33 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -175,6 +175,28 @@ static u32 user_event_key(char *name)
 	return jhash(name, strlen(name), 0);
 }
 
+static struct user_event *user_event_get(struct user_event *user)
+{
+	refcount_inc(&user->refcnt);
+
+	return user;
+}
+
+static void user_event_put(struct user_event *user, bool locked)
+{
+#ifdef CONFIG_LOCKDEP
+	if (locked)
+		lockdep_assert_held(&event_mutex);
+	else
+		lockdep_assert_not_held(&event_mutex);
+#endif
+
+	if (unlikely(!user))
+		return;
+
+	refcount_dec(&user->refcnt);
+}
+
 static void user_event_group_destroy(struct user_event_group *group)
 {
 	kfree(group->system_name);
@@ -258,12 +280,13 @@ static struct user_event_group
 	return NULL;
 };
 
-static void user_event_enabler_destroy(struct user_event_enabler *enabler)
+static void user_event_enabler_destroy(struct user_event_enabler *enabler,
+				       bool locked)
 {
 	list_del_rcu(&enabler->link);
 
 	/* No longer tracking the event via the enabler */
-	refcount_dec(&enabler->event->refcnt);
+	user_event_put(enabler->event, locked);
 
 	kfree(enabler);
 }
@@ -325,7 +348,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	/* User asked for enabler to be removed during fault */
 	if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) {
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, true);
 		goto out;
 	}
 
@@ -489,13 +512,12 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig,
 	if (!enabler)
 		return false;
 
-	enabler->event = orig->event;
+	enabler->event = user_event_get(orig->event);
 	enabler->addr = orig->addr;
 
 	/* Only dup part of value (ignore future flags, etc) */
 	enabler->values = orig->values & ENABLE_VAL_DUP_MASK;
 
-	refcount_inc(&enabler->event->refcnt);
 	list_add_rcu(&enabler->link, &mm->enablers);
 
 	return true;
@@ -595,7 +617,7 @@ static void user_event_mm_destroy(struct user_event_mm *mm)
 	struct user_event_enabler *enabler, *next;
 
 	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, false);
 
 	mmdrop(mm->mm);
 	kfree(mm);
@@ -748,7 +770,7 @@ static struct user_event_enabler
 	 * exit or run exec(), which includes forks and clones.
 	 */
 	if (!*write_result) {
-		refcount_inc(&enabler->event->refcnt);
+		user_event_get(user);
 		list_add_rcu(&enabler->link, &user_mm->enablers);
 	}
 
@@ -1336,10 +1358,8 @@ static struct user_event *find_user_event(struct user_event_group *group,
 	*outkey = key;
 
 	hash_for_each_possible(group->register_table, user, node, key)
-		if (!strcmp(EVENT_NAME(user), name)) {
-			refcount_inc(&user->refcnt);
-			return user;
-		}
+		if (!strcmp(EVENT_NAME(user), name))
+			return user_event_get(user);
 
 	return NULL;
 }
@@ -1553,12 +1573,12 @@ static int user_event_reg(struct trace_event_call *call,
 
 	return ret;
 inc:
-	refcount_inc(&user->refcnt);
+	user_event_get(user);
 	update_enable_bit_for(user);
 	return 0;
 dec:
 	update_enable_bit_for(user);
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 	return 0;
 }
 
@@ -1592,7 +1612,7 @@ static int user_event_create(const char *raw_command)
 	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
-		refcount_dec(&user->refcnt);
+		user_event_put(user, false);
 
 	mutex_unlock(&group->reg_mutex);
 
@@ -1856,7 +1876,7 @@ static int delete_user_event(struct user_event_group *group, char *name)
 	if (!user)
 		return -ENOENT;
 
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 
 	if (!user_event_last_ref(user))
 		return -EBUSY;
@@ -2015,9 +2035,7 @@ static int user_events_ref_add(struct user_event_file_info *info,
 	for (i = 0; i < count; ++i)
 		new_refs->events[i] = refs->events[i];
 
-	new_refs->events[i] = user;
-
-	refcount_inc(&user->refcnt);
+	new_refs->events[i] = user_event_get(user);
 
 	rcu_assign_pointer(info->refs, new_refs);
 
@@ -2131,7 +2149,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	ret = user_events_ref_add(info, user);
 
 	/* No longer need parse ref, ref_add either worked or not */
-	refcount_dec(&user->refcnt);
+	user_event_put(user, false);
 
 	/* Positive number is index and valid */
 	if (ret < 0)
@@ -2280,7 +2298,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 		set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));
 
 		if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))
-			user_event_enabler_destroy(enabler);
+			user_event_enabler_destroy(enabler, true);
 
 		/* Removed at least one */
 		ret = 0;
@@ -2337,7 +2355,6 @@ static int user_events_release(struct inode *node, struct file *file)
 	struct user_event_file_info *info = file->private_data;
 	struct user_event_group *group;
 	struct user_event_refs *refs;
-	struct user_event *user;
 	int i;
 
 	if (!info)
@@ -2361,12 +2378,9 @@ static int user_events_release(struct inode *node, struct file *file)
 	 * The underlying user_events are ref counted, and cannot be freed.
 	 * After this decrement, the user_events may be freed elsewhere.
 	 */
-	for (i = 0; i < refs->count; ++i) {
-		user = refs->events[i];
+	for (i = 0; i < refs->count; ++i)
+		user_event_put(refs->events[i], false);
 
-		if (user)
-			refcount_dec(&user->refcnt);
-	}
 out:
 	file->private_data = NULL;

From patchwork Tue May 30 23:53:02 2023
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 3/5] tracing/user_events: Add flag to auto-delete events
Date: Tue, 30 May 2023 16:53:02 -0700
Message-Id: <20230530235304.2726-4-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

Currently user events must be deleted manually, either via the delete ioctl call or via the dynamic_events file. Some operators and processes want these events to clean up automatically when nothing uses them any longer, to prevent events from piling up without manual maintenance.

Add an auto-delete flag to the user-facing header and honor it within the register ioctl call. Add a max flag as well, to ensure that only known flags can be used, now and in the future. Update user_event_put() to attempt an auto delete of the event if it is the last reference.

The auto delete must run in a work queue to preserve the expected behavior of class->reg() invocations, which do not expect the call to go away from underneath them during unregister. Add a work_struct to struct user_event to ensure we can do this reliably.

Link: https://lore.kernel.org/linux-trace-kernel/20230518093600.3f119d68@rorschach.local.home/
Suggested-by: Steven Rostedt
Signed-off-by: Beau Belgrave
---
 include/uapi/linux/user_events.h |  10 ++-
 kernel/trace/trace_events_user.c | 115 +++++++++++++++++++++++++++----
 2 files changed, 112 insertions(+), 13 deletions(-)

diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
index 2984aae4a2b4..635f45bc6457 100644
--- a/include/uapi/linux/user_events.h
+++ b/include/uapi/linux/user_events.h
@@ -17,6 +17,14 @@
 /* Create dynamic location entry within a 32-bit value */
 #define DYN_LOC(offset, size) ((size) << 16 | (offset))
 
+enum user_reg_flag {
+	/* Event will auto delete upon last reference closing */
+	USER_EVENT_REG_AUTO_DEL = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+};
+
 /*
  * Describes an event registration and stores the results of the registration.
 * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
@@ -33,7 +41,7 @@ struct user_reg {
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 8f0fb6cb0f33..ddd199f286fe 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -85,6 +85,7 @@ struct user_event {
 	struct hlist_node	node;
 	struct list_head	fields;
 	struct list_head	validators;
+	struct work_struct	put_work;
 	refcount_t		refcnt;
 	int			min_size;
 	int			reg_flags;
@@ -169,6 +170,7 @@ static int user_event_parse(struct user_event_group *group, char *name,
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
 static void user_event_mm_put(struct user_event_mm *mm);
+static int destroy_user_event(struct user_event *user);
 
 static u32 user_event_key(char *name)
 {
@@ -182,19 +184,98 @@ static struct user_event *user_event_get(struct user_event *user)
 	return user;
 }
 
+static void delayed_destroy_user_event(struct work_struct *work)
+{
+	struct user_event *user = container_of(
+		work, struct user_event, put_work);
+
+	mutex_lock(&event_mutex);
+
+	if (!refcount_dec_and_test(&user->refcnt))
+		goto out;
+
+	if (destroy_user_event(user)) {
+		/*
+		 * The only reason this would fail here is if we cannot
+		 * update the visibility of the event. In this case the
+		 * event stays in the hashtable, waiting for someone to
+		 * attempt to delete it later.
+		 */
+		pr_warn("user_events: Unable to delete event\n");
+		refcount_set(&user->refcnt, 1);
+	}
+out:
+	mutex_unlock(&event_mutex);
+}
+
 static void user_event_put(struct user_event *user, bool locked)
 {
-#ifdef CONFIG_LOCKDEP
-	if (locked)
-		lockdep_assert_held(&event_mutex);
-	else
-		lockdep_assert_not_held(&event_mutex);
-#endif
+	bool delete;
 
 	if (unlikely(!user))
 		return;
 
-	refcount_dec(&user->refcnt);
+	/*
+	 * When the event is not enabled for auto-delete there will always
+	 * be at least 1 reference to the event. During the event creation
+	 * we initially set the refcnt to 2 to achieve this. In those cases
+	 * the caller must acquire event_mutex and after decrement check if
+	 * the refcnt is 1, meaning this is the last reference. When auto
+	 * delete is enabled, there will only be 1 ref, IE: refcnt will be
+	 * only set to 1 during creation to allow the below checks to go
+	 * through upon the last put. The last put must always be done with
+	 * the event mutex held.
+	 */
+	if (!locked) {
+		lockdep_assert_not_held(&event_mutex);
+		delete = refcount_dec_and_mutex_lock(&user->refcnt, &event_mutex);
+	} else {
+		lockdep_assert_held(&event_mutex);
+		delete = refcount_dec_and_test(&user->refcnt);
+	}
+
+	if (!delete)
+		return;
+
+	/* We now have the event_mutex in all cases */
+
+	if (!(user->reg_flags & USER_EVENT_REG_AUTO_DEL)) {
+		/* We should not get here unless the auto-delete flag is set */
+		pr_alert("BUG: Auto-delete engaged without it enabled\n");
+		goto out;
+	}
+
+	/*
+	 * Unfortunately we have to attempt the actual destroy in a work
+	 * queue. This is because not all cases handle a trace_event_call
+	 * being removed within the class->reg() operation for unregister.
+	 */
+	INIT_WORK(&user->put_work, delayed_destroy_user_event);
+
+	/*
+	 * Since the event is still in the hashtable, we have to re-inc
+	 * the ref count to 1. This count will be decremented and checked
+	 * in the work queue to ensure it's still the last ref. This is
+	 * needed because a user-process could register the same event in
+	 * between the time of event_mutex release and the work queue
+	 * running the delayed destroy. If we removed the item now from
+	 * the hashtable, this would result in a timing window where a
+	 * user process would fail a register because the trace_event_call
+	 * register would fail in the tracing layers.
+	 */
+	refcount_set(&user->refcnt, 1);
+
+	if (!schedule_work(&user->put_work)) {
+		/*
+		 * If we fail we must wait for an admin to attempt delete or
+		 * another register/close of the event, whichever is first.
+		 */
+		pr_warn("user_events: Unable to queue delayed destroy\n");
+	}
+out:
+	/* Ensure if we didn't have event_mutex before we unlock it */
+	if (!locked)
+		mutex_unlock(&event_mutex);
 }
 
 static void user_event_group_destroy(struct user_event_group *group)
@@ -793,7 +874,12 @@ static struct user_event_enabler
 static __always_inline __must_check
 bool user_event_last_ref(struct user_event *user)
 {
-	return refcount_read(&user->refcnt) == 1;
+	int last = 1;
+
+	if (user->reg_flags & USER_EVENT_REG_AUTO_DEL)
+		last = 0;
+
+	return refcount_read(&user->refcnt) == last;
 }
 
 static __always_inline __must_check
@@ -1843,8 +1929,13 @@ static int user_event_parse(struct user_event_group *group, char *name,
 
 	user->reg_flags = reg_flags;
 
-	/* Ensure we track self ref and caller ref (2) */
-	refcount_set(&user->refcnt, 2);
+	if (user->reg_flags & USER_EVENT_REG_AUTO_DEL) {
+		/* Ensure we track only caller ref (1) */
+		refcount_set(&user->refcnt, 1);
+	} else {
+		/* Ensure we track self ref and caller ref (2) */
+		refcount_set(&user->refcnt, 2);
+	}
 
 	dyn_event_init(&user->devent, &user_event_dops);
 	dyn_event_add(&user->devent, &user->call);
@@ -2066,8 +2157,8 @@ static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg)
 	if (ret)
 		return ret;
 
-	/* Ensure no flags, since we don't support any yet */
-	if (kreg->flags != 0)
+	/* Ensure only valid flags */
+	if (kreg->flags & ~(USER_EVENT_REG_MAX-1))
 		return -EINVAL;
 
 	/* Ensure supported size */

From patchwork Tue May 30 23:53:03 2023
permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id i184-20020a6387c1000000b0051f7686dfb7si9439748pge.189.2023.05.30.16.55.49; Tue, 30 May 2023 16:56:01 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=KTalf09d; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233981AbjE3XxW (ORCPT + 99 others); Tue, 30 May 2023 19:53:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37028 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233493AbjE3XxQ (ORCPT ); Tue, 30 May 2023 19:53:16 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 5207DF9; Tue, 30 May 2023 16:53:13 -0700 (PDT) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id 8AF2520FC475; Tue, 30 May 2023 16:53:12 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 8AF2520FC475 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1685490792; bh=jY9TSOdHVrfPeYA1EVOelAq4LjG1oi4wE8/rpaq0pvM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KTalf09dM9FN1aKmNTp560gjdCVKv7BiWHr49EW1QWtkEc6a185AqMkvW3kzzh7+v ArtAE6Os4uFfCRo5E0PcJzYcbOcZYOBOkyru6T2VoSodd0B3JNbq07AvFjcpGAxzLV 
EaNmsJjsfUuC6cUYIIPUVOju1g8WPoBRCK13X60Y= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com Subject: [PATCH 4/5] tracing/user_events: Add self-test for auto-del flag Date: Tue, 30 May 2023 16:53:03 -0700 Message-Id: <20230530235304.2726-5-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com> References: <20230530235304.2726-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL, USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767365370236650150?= X-GMAIL-MSGID: =?utf-8?q?1767365370236650150?= A new flag for auto-deleting user_events upon the last reference exists now. We must ensure this flag works correctly in the common cases. Update abi self test to ensure when this flag is used the user_event goes away at the appropriate time. Ensure last fd, enabler, and trace_event_call refs paths correctly delete the event. 
Signed-off-by: Beau Belgrave
---
 .../testing/selftests/user_events/abi_test.c | 115 ++++++++++++++++--
 1 file changed, 107 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index 5125c42efe65..9c726616f763 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -22,10 +22,41 @@
 const char *data_file = "/sys/kernel/tracing/user_events_data";
 const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable";
+const char *temp_enable_file = "/sys/kernel/tracing/events/user_events/__abi_temp_event/enable";
 
-static int change_event(bool enable)
+static bool temp_exists(int grace_ms)
+{
+	int fd;
+
+	usleep(grace_ms * 1000);
+
+	fd = open(temp_enable_file, O_RDONLY);
+
+	if (fd == -1)
+		return false;
+
+	close(fd);
+
+	return true;
+}
+
+static int clear_temp(void)
+{
+	int fd = open(data_file, O_RDWR);
+	int ret = 0;
+
+	if (ioctl(fd, DIAG_IOCSDEL, "__abi_temp_event") == -1)
+		if (errno != ENOENT)
+			ret = -1;
+
+	close(fd);
+
+	return ret;
+}
+
+static int __change_event(const char *path, bool enable)
 {
-	int fd = open(enable_file, O_RDWR);
+	int fd = open(path, O_RDWR);
 	int ret;
 
 	if (fd < 0)
@@ -46,22 +77,48 @@ static int change_event(bool enable)
 	return ret;
 }
 
-static int reg_enable(long *enable, int size, int bit)
+static int change_temp_event(bool enable)
+{
+	return __change_event(temp_enable_file, enable);
+}
+
+static int change_event(bool enable)
+{
+	return __change_event(enable_file, enable);
+}
+
+static int __reg_enable(int *fd, const char *name, long *enable, int size,
+			int bit, int flags)
 {
 	struct user_reg reg = {0};
-	int fd = open(data_file, O_RDWR);
-	int ret;
 
-	if (fd < 0)
+	*fd = open(data_file, O_RDWR);
+
+	if (*fd < 0)
 		return -1;
 
 	reg.size = sizeof(reg);
-	reg.name_args = (__u64)"__abi_event";
+	reg.name_args = (__u64)name;
 	reg.enable_bit = bit;
 	reg.enable_addr = (__u64)enable;
 	reg.enable_size = size;
+	reg.flags = flags;
+
+	return ioctl(*fd, DIAG_IOCSREG, &reg);
+}
 
-	ret = ioctl(fd, DIAG_IOCSREG, &reg);
+static int reg_enable_temp(int *fd, long *enable, int size, int bit)
+{
+	return __reg_enable(fd, "__abi_temp_event", enable, size, bit,
+			    USER_EVENT_REG_AUTO_DEL);
+}
+
+static int reg_enable(long *enable, int size, int bit)
+{
+	int ret;
+	int fd;
+
+	ret = __reg_enable(&fd, "__abi_event", enable, size, bit, 0);
 
 	close(fd);
 
@@ -98,6 +155,7 @@ FIXTURE_SETUP(user) {
 }
 
 FIXTURE_TEARDOWN(user) {
+	clear_temp();
 }
 
 TEST_F(user, enablement) {
@@ -223,6 +281,47 @@ TEST_F(user, clones) {
 	ASSERT_EQ(0, change_event(false));
 }
 
+TEST_F(user, flags) {
+	int grace = 100;
+	int fd;
+
+	/* FLAG: USER_EVENT_REG_AUTO_DEL */
+	/* Removal path 1, close on last fd ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	close(fd);
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 2, close on last enabler */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 3, close on last trace_event ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, change_temp_event(true));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, change_temp_event(false));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* FLAG: Non-ABI */
+	/* Unknown flags should fail with EINVAL */
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX));
+	ASSERT_EQ(EINVAL, errno);
+
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX + 1));
+	ASSERT_EQ(EINVAL, errno);
+}
+
 int main(int argc, char **argv)
 {
 	return test_harness_run(argc, argv);

From patchwork Tue May 30 23:53:04 2023
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 5/5] tracing/user_events: Add auto-del flag documentation
Date: Tue, 30 May 2023 16:53:04 -0700
Message-Id: <20230530235304.2726-6-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

There is now a flag for user_events to use when registering events so
that an event auto-deletes upon the last reference being put.

Add the new flag, USER_EVENT_REG_AUTO_DEL, to the user_events
documentation files to let people know how to use it.

Signed-off-by: Beau Belgrave
---
 Documentation/trace/user_events.rst | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst
index f79987e16cf4..946da25be812 100644
--- a/Documentation/trace/user_events.rst
+++ b/Documentation/trace/user_events.rst
@@ -39,6 +39,14 @@ DIAG_IOCSREG.
 This command takes a packed struct user_reg as an argument::
 
+  enum user_reg_flag {
+	/* Event will auto delete upon last reference closing */
+	USER_EVENT_REG_AUTO_DEL = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+  };
+
   struct user_reg {
 	/* Input: Size of the user_reg structure being used */
 	__u32 size;
@@ -49,7 +57,7 @@ This command takes a packed struct user_reg as an argument::
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
@@ -73,10 +81,13 @@ The struct user_reg requires all the above inputs to be set appropriately.
   This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be
   used on 64-bit kernels, however, 32-bit can be used on all kernels.
 
-+ flags: The flags to use, if any. For the initial version this must be 0.
-  Callers should first attempt to use flags and retry without flags to ensure
-  support for lower versions of the kernel. If a flag is not supported -EINVAL
-  is returned.
++ flags: The flags to use, if any. Callers should first attempt to use flags
+  and retry without flags to ensure support for lower versions of the kernel.
+  If a flag is not supported -EINVAL is returned.
+
+  **USER_EVENT_REG_AUTO_DEL**
+	When the last reference is closed for the event, the event will delete
+	itself automatically as if the delete IOCTL was issued by a user.
 
 + enable_addr: The address of the value to use to reflect event status. This
   must be naturally aligned and write accessible within the user program.