From patchwork Wed Mar 29 19:45:17 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76797
Message-ID: <20230329194549.186982647@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:17 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 01/25] fprobe: Pass entry_data to handlers
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Pass the private entry_data to the entry and exit handlers so that they
can share context data, such as saved function arguments. The user must
specify the private entry_data size via the @entry_data_size field before
registering the fprobe.

Link: https://lkml.kernel.org/r/167526696173.433354.17408372048319432574.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/fprobe.h          |  8 ++++++--
 kernel/trace/bpf_trace.c        |  2 +-
 kernel/trace/fprobe.c           | 21 ++++++++++++++-------
 lib/test_fprobe.c               |  6 ++++--
 samples/fprobe/fprobe_example.c |  6 ++++--
 5 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h
index 1c2bde0ead73..e0d4e6136249 100644
--- a/include/linux/fprobe.h
+++ b/include/linux/fprobe.h
@@ -13,6 +13,7 @@
  * @nmissed: The counter for missing events.
  * @flags: The status flag.
  * @rethook: The rethook data structure. (internal data)
+ * @entry_data_size: The private data storage size.
  * @entry_handler: The callback function for function entry.
  * @exit_handler: The callback function for function exit.
  */
@@ -29,9 +30,12 @@ struct fprobe {
        unsigned long           nmissed;
        unsigned int            flags;
        struct rethook          *rethook;
+       size_t                  entry_data_size;
 
-       void (*entry_handler)(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs);
-       void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs);
+       void (*entry_handler)(struct fprobe *fp, unsigned long entry_ip,
+                             struct pt_regs *regs, void *entry_data);
+       void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip,
+                            struct pt_regs *regs, void *entry_data);
 };
 
 /* This fprobe is soft-disabled. */
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e8da032bb6fc..fa403c323501 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2646,7 +2646,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
 
 static void
 kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip,
-                         struct pt_regs *regs)
+                         struct pt_regs *regs, void *data)
 {
        struct bpf_kprobe_multi_link *link;
 
diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index e8143e368074..fa25d09c9d57 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -17,14 +17,16 @@
 struct fprobe_rethook_node {
        struct rethook_node node;
        unsigned long entry_ip;
+       char data[];
 };
 
 static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
                           struct ftrace_ops *ops, struct ftrace_regs *fregs)
 {
        struct fprobe_rethook_node *fpr;
-       struct rethook_node *rh;
+       struct rethook_node *rh = NULL;
        struct fprobe *fp;
+       void *entry_data = NULL;
        int bit;
 
        fp = container_of(ops, struct fprobe, ops);
@@ -37,9 +39,6 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
                return;
        }
 
-       if (fp->entry_handler)
-               fp->entry_handler(fp, ip, ftrace_get_regs(fregs));
-
        if (fp->exit_handler) {
                rh = rethook_try_get(fp->rethook);
                if (!rh) {
@@ -48,9 +47,16 @@
                }
                fpr = container_of(rh, struct fprobe_rethook_node, node);
                fpr->entry_ip = ip;
-               rethook_hook(rh, ftrace_get_regs(fregs), true);
+               if (fp->entry_data_size)
+                       entry_data = fpr->data;
        }
 
+       if (fp->entry_handler)
+               fp->entry_handler(fp, ip, ftrace_get_regs(fregs), entry_data);
+
+       if (rh)
+               rethook_hook(rh, ftrace_get_regs(fregs), true);
+
 out:
        ftrace_test_recursion_unlock(bit);
 }
@@ -81,7 +87,8 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data,
 
        fpr = container_of(rh, struct fprobe_rethook_node, node);
 
-       fp->exit_handler(fp, fpr->entry_ip, regs);
+       fp->exit_handler(fp, fpr->entry_ip, regs,
+                        fp->entry_data_size ? (void *)fpr->data : NULL);
 }
 NOKPROBE_SYMBOL(fprobe_exit_handler);
 
@@ -146,7 +153,7 @@ static int fprobe_init_rethook(struct fprobe *fp, int num)
        for (i = 0; i < size; i++) {
                struct fprobe_rethook_node *node;
 
-               node = kzalloc(sizeof(*node), GFP_KERNEL);
+               node = kzalloc(sizeof(*node) + fp->entry_data_size, GFP_KERNEL);
                if (!node) {
                        rethook_free(fp->rethook);
                        fp->rethook = NULL;
diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index 1fb56cf5e5ce..e4f65d114ed2 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -30,7 +30,8 @@ static noinline u32 fprobe_selftest_target2(u32 value)
        return (value / div_factor) + 1;
 }
 
-static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
+static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip,
+                                    struct pt_regs *regs, void *data)
 {
        KUNIT_EXPECT_FALSE(current_test, preemptible());
        /* This can be called on the fprobe_selftest_target and the fprobe_selftest_target2 */
@@ -39,7 +40,8 @@ static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip, struct
        entry_val = (rand1 / div_factor);
 }
 
-static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
+static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
+                                   struct pt_regs *regs, void *data)
 {
        unsigned long ret = regs_return_value(regs);
 
diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c
index e22da8573116..dd794990ad7e 100644
--- a/samples/fprobe/fprobe_example.c
+++ b/samples/fprobe/fprobe_example.c
@@ -48,7 +48,8 @@ static void show_backtrace(void)
        stack_trace_print(stacks, len, 24);
 }
 
-static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
+static void sample_entry_handler(struct fprobe *fp, unsigned long ip,
+                                struct pt_regs *regs, void *data)
 {
        if (use_trace)
                /*
@@ -63,7 +64,8 @@ static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_
        show_backtrace();
 }
 
-static void sample_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
+static void sample_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs,
+                               void *data)
 {
        unsigned long rip = instruction_pointer(regs);
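As an illustration of the new interface, the following minimal sketch (not part of
the series; the handler names, the probed symbol "kernel_clone", and the module
boilerplate are illustrative assumptions) shares a timestamp between the entry and
exit handlers through the new entry_data storage:

/*
 * Sketch only: measure how long the probed function takes by saving a
 * timestamp in the per-call entry_data at entry and reading it at exit.
 * entry_data is non-NULL because both exit_handler and entry_data_size
 * are set before registration.
 */
#include <linux/fprobe.h>
#include <linux/ktime.h>
#include <linux/module.h>

static void latency_entry(struct fprobe *fp, unsigned long entry_ip,
                          struct pt_regs *regs, void *entry_data)
{
        *(u64 *)entry_data = ktime_get_ns();
}

static void latency_exit(struct fprobe *fp, unsigned long entry_ip,
                         struct pt_regs *regs, void *entry_data)
{
        pr_info("took %llu ns\n", ktime_get_ns() - *(u64 *)entry_data);
}

static struct fprobe latency_fp = {
        .entry_handler   = latency_entry,
        .exit_handler    = latency_exit,
        .entry_data_size = sizeof(u64),  /* must be set before registering */
};

static int __init latency_init(void)
{
        /* "kernel_clone" is only an example filter pattern */
        return register_fprobe(&latency_fp, "kernel_clone", NULL);
}

static void __exit latency_cleanup(void)
{
        unregister_fprobe(&latency_fp);
}

module_init(latency_init);
module_exit(latency_cleanup);
MODULE_LICENSE("GPL");

Because the storage lives in the rethook node, each in-flight call gets its own
entry_data slot, so nested or concurrent invocations do not clobber each other.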
From patchwork Wed Mar 29 19:45:18 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76782
Message-ID: <20230329194549.389335497@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:18 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 02/25] lib/test_fprobe: Add private entry_data testcases
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Add test cases that check whether the private entry_data is correctly
passed to the entry and exit handlers.
Link: https://lkml.kernel.org/r/167526697074.433354.17790288501657876219.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 lib/test_fprobe.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index e4f65d114ed2..6c7ef5acea21 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -38,6 +38,12 @@ static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip,
        if (ip != target_ip)
                KUNIT_EXPECT_EQ(current_test, ip, target2_ip);
        entry_val = (rand1 / div_factor);
+       if (fp->entry_data_size) {
+               KUNIT_EXPECT_NOT_NULL(current_test, data);
+               if (data)
+                       *(u32 *)data = entry_val;
+       } else
+               KUNIT_EXPECT_NULL(current_test, data);
 }
 
 static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
@@ -53,6 +59,12 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
        KUNIT_EXPECT_EQ(current_test, ret, (rand1 / div_factor));
        KUNIT_EXPECT_EQ(current_test, entry_val, (rand1 / div_factor));
        exit_val = entry_val + div_factor;
+       if (fp->entry_data_size) {
+               KUNIT_EXPECT_NOT_NULL(current_test, data);
+               if (data)
+                       KUNIT_EXPECT_EQ(current_test, *(u32 *)data, entry_val);
+       } else
+               KUNIT_EXPECT_NULL(current_test, data);
 }
 
 /* Test entry only (no rethook) */
@@ -134,6 +146,23 @@ static void test_fprobe_syms(struct kunit *test)
        KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
 }
 
+/* Test private entry_data */
+static void test_fprobe_data(struct kunit *test)
+{
+       struct fprobe fp = {
+               .entry_handler = fp_entry_handler,
+               .exit_handler = fp_exit_handler,
+               .entry_data_size = sizeof(u32),
+       };
+
+       current_test = test;
+       KUNIT_EXPECT_EQ(test, 0, register_fprobe(&fp, "fprobe_selftest_target", NULL));
+
+       target(rand1);
+
+       KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
+}
+
 static unsigned long get_ftrace_location(void *func)
 {
        unsigned long size, addr = (unsigned long)func;
@@ -159,6 +188,7 @@ static struct kunit_case fprobe_testcases[] = {
        KUNIT_CASE(test_fprobe_entry),
        KUNIT_CASE(test_fprobe),
        KUNIT_CASE(test_fprobe_syms),
+       KUNIT_CASE(test_fprobe_data),
        {}
 };
From patchwork Wed Mar 29 19:45:19 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76798
Message-ID: <20230329194549.595038894@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:19 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 03/25] fprobe: Add nr_maxactive to specify rethook_node pool size
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Add nr_maxactive to specify the rethook_node pool size. This is the
maximum number of target functions that can concurrently be probed by
the exit_handler. Note that a running function that is preempted or
sleeping is still counted as 'active'.
Link: https://lkml.kernel.org/r/167526697917.433354.17779774988245113106.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/fprobe.h | 2 ++
 kernel/trace/fprobe.c  | 5 ++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h
index e0d4e6136249..678f741a7b33 100644
--- a/include/linux/fprobe.h
+++ b/include/linux/fprobe.h
@@ -14,6 +14,7 @@
  * @flags: The status flag.
  * @rethook: The rethook data structure. (internal data)
  * @entry_data_size: The private data storage size.
+ * @nr_maxactive: The max number of active functions.
  * @entry_handler: The callback function for function entry.
  * @exit_handler: The callback function for function exit.
  */
@@ -31,6 +32,7 @@ struct fprobe {
        unsigned int            flags;
        struct rethook          *rethook;
        size_t                  entry_data_size;
+       int                     nr_maxactive;
 
        void (*entry_handler)(struct fprobe *fp, unsigned long entry_ip,
                              struct pt_regs *regs, void *entry_data);
diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index fa25d09c9d57..f222848571f2 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -143,7 +143,10 @@ static int fprobe_init_rethook(struct fprobe *fp, int num)
        }
 
        /* Initialize rethook if needed */
-       size = num * num_possible_cpus() * 2;
+       if (fp->nr_maxactive)
+               size = fp->nr_maxactive;
+       else
+               size = num * num_possible_cpus() * 2;
        if (size < 0)
                return -E2BIG;
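A hypothetical usage sketch (not from the patch; the handler names and the
"vfs_read" target are illustrative): a caller that expects many concurrent or
sleeping invocations of the probed function can size the rethook pool explicitly
instead of relying on the default of num * num_possible_cpus() * 2.

#include <linux/fprobe.h>
#include <linux/module.h>

static void pool_entry(struct fprobe *fp, unsigned long entry_ip,
                       struct pt_regs *regs, void *entry_data)
{
}

static void pool_exit(struct fprobe *fp, unsigned long entry_ip,
                      struct pt_regs *regs, void *entry_data)
{
}

static struct fprobe pool_fp = {
        .entry_handler = pool_entry,
        .exit_handler  = pool_exit,
        .nr_maxactive  = 64,    /* allocate 64 rethook nodes instead of the default */
};

static int __init pool_init(void)
{
        return register_fprobe(&pool_fp, "vfs_read", NULL);
}

static void __exit pool_cleanup(void)
{
        unregister_fprobe(&pool_fp);
        /* invocations that found the pool exhausted are accounted here */
        pr_info("missed %lu exits\n", pool_fp.nmissed);
}

module_init(pool_init);
module_exit(pool_cleanup);
MODULE_LICENSE("GPL");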
From patchwork Wed Mar 29 19:45:20 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76794
Message-ID: <20230329194549.798918632@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:20 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 04/25] lib/test_fprobe: Add a test case for nr_maxactive
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Add a test case for nr_maxactive. If the number of active functions
exceeds nr_maxactive, the excess invocations must be skipped.
Link: https://lkml.kernel.org/r/167526698856.433354.4430007340787176666.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 lib/test_fprobe.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index 6c7ef5acea21..4b37d7022f35 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -17,8 +17,10 @@ static u32 rand1, entry_val, exit_val;
 /* Use indirect calls to avoid inlining the target functions */
 static u32 (*target)(u32 value);
 static u32 (*target2)(u32 value);
+static u32 (*target_nest)(u32 value, u32 (*nest)(u32));
 static unsigned long target_ip;
 static unsigned long target2_ip;
+static unsigned long target_nest_ip;
 
 static noinline u32 fprobe_selftest_target(u32 value)
 {
@@ -30,6 +32,11 @@ static noinline u32 fprobe_selftest_target2(u32 value)
        return (value / div_factor) + 1;
 }
 
+static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32))
+{
+       return nest(value + 2);
+}
+
 static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip,
                                     struct pt_regs *regs, void *data)
 {
@@ -67,6 +74,19 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
                KUNIT_EXPECT_NULL(current_test, data);
 }
 
+static notrace void nest_entry_handler(struct fprobe *fp, unsigned long ip,
+                                      struct pt_regs *regs, void *data)
+{
+       KUNIT_EXPECT_FALSE(current_test, preemptible());
+}
+
+static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip,
+                                     struct pt_regs *regs, void *data)
+{
+       KUNIT_EXPECT_FALSE(current_test, preemptible());
+       KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip);
+}
+
 /* Test entry only (no rethook) */
 static void test_fprobe_entry(struct kunit *test)
 {
@@ -163,6 +183,25 @@ static void test_fprobe_data(struct kunit *test)
        KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
 }
 
+/* Test nr_maxactive */
+static void test_fprobe_nest(struct kunit *test)
+{
+       static const char *syms[] = {"fprobe_selftest_target", "fprobe_selftest_nest_target"};
+       struct fprobe fp = {
+               .entry_handler = nest_entry_handler,
+               .exit_handler = nest_exit_handler,
+               .nr_maxactive = 1,
+       };
+
+       current_test = test;
+       KUNIT_EXPECT_EQ(test, 0, register_fprobe_syms(&fp, syms, 2));
+
+       target_nest(rand1, target);
+       KUNIT_EXPECT_EQ(test, 1, fp.nmissed);
+
+       KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
+}
+
 static unsigned long get_ftrace_location(void *func)
 {
        unsigned long size, addr = (unsigned long)func;
@@ -178,8 +217,10 @@ static int fprobe_test_init(struct kunit *test)
        rand1 = get_random_u32_above(div_factor);
        target = fprobe_selftest_target;
        target2 = fprobe_selftest_target2;
+       target_nest = fprobe_selftest_nest_target;
        target_ip = get_ftrace_location(target);
        target2_ip = get_ftrace_location(target2);
+       target_nest_ip = get_ftrace_location(target_nest);
 
        return 0;
 }
@@ -189,6 +230,7 @@ static struct kunit_case fprobe_testcases[] = {
        KUNIT_CASE(test_fprobe_entry),
        KUNIT_CASE(test_fprobe),
        KUNIT_CASE(test_fprobe_syms),
        KUNIT_CASE(test_fprobe_data),
+       KUNIT_CASE(test_fprobe_nest),
        {}
 };
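The test above attaches one fprobe to two symbols with register_fprobe_syms().
For reference, a minimal hypothetical sketch of that call pattern outside the
selftest (the handler and the symbol names are illustrative only):

#include <linux/fprobe.h>
#include <linux/module.h>

static void multi_entry(struct fprobe *fp, unsigned long entry_ip,
                        struct pt_regs *regs, void *entry_data)
{
        pr_info("hit %pS\n", (void *)entry_ip);
}

/* Example symbols only; any ftrace-able functions work. */
static const char *multi_syms[] = { "vfs_read", "vfs_write" };

static struct fprobe multi_fp = {
        .entry_handler = multi_entry,
};

static int __init multi_init(void)
{
        /* attach the same fprobe to every symbol in the array */
        return register_fprobe_syms(&multi_fp, multi_syms, ARRAY_SIZE(multi_syms));
}

static void __exit multi_cleanup(void)
{
        unregister_fprobe(&multi_fp);
}

module_init(multi_init);
module_exit(multi_cleanup);
MODULE_LICENSE("GPL");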
From patchwork Wed Mar 29 19:45:21 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76793
Message-ID: <20230329194550.004033858@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:21 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 05/25] fprobe: Skip exit_handler if entry_handler returns !0
References: <20230329194516.146147554@goodmis.org>
From: "Masami Hiramatsu (Google)"

Skip hooking the function return and calling the exit_handler if the
entry_handler() returns !0.

Link: https://lkml.kernel.org/r/167526699798.433354.10998365726830117303.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/fprobe.h          |  4 ++--
 kernel/trace/bpf_trace.c        | 15 +++++++++++++--
 kernel/trace/fprobe.c           | 14 +++++++++-----
 lib/test_fprobe.c               |  7 +++++--
 samples/fprobe/fprobe_example.c |  5 +++--
 5 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h
index 678f741a7b33..47fefc7f363b 100644
--- a/include/linux/fprobe.h
+++ b/include/linux/fprobe.h
@@ -34,8 +34,8 @@ struct fprobe {
        size_t                  entry_data_size;
        int                     nr_maxactive;
 
-       void (*entry_handler)(struct fprobe *fp, unsigned long entry_ip,
-                             struct pt_regs *regs, void *entry_data);
+       int (*entry_handler)(struct fprobe *fp, unsigned long entry_ip,
+                            struct pt_regs *regs, void *entry_data);
        void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip,
                             struct pt_regs *regs, void *entry_data);
 };
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index fa403c323501..d804172b709c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2644,12 +2644,23 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
        return err;
 }
 
-static void
+static int
 kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip,
                          struct pt_regs *regs, void *data)
 {
        struct bpf_kprobe_multi_link *link;
 
+       link = container_of(fp, struct bpf_kprobe_multi_link, fp);
+       kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs);
+       return 0;
+}
+
+static void
+kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip,
+                              struct pt_regs *regs, void *data)
+{
+       struct bpf_kprobe_multi_link *link;
+
        link = container_of(fp, struct bpf_kprobe_multi_link, fp);
        kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs);
 }
@@ -2848,7 +2859,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
                goto error;
 
        if (flags & BPF_F_KPROBE_MULTI_RETURN)
-               link->fp.exit_handler = kprobe_multi_link_handler;
+               link->fp.exit_handler = kprobe_multi_link_exit_handler;
        else
                link->fp.entry_handler = kprobe_multi_link_handler;
 
diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index f222848571f2..9abb3905bc8e 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -27,7 +27,7 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
        struct rethook_node *rh = NULL;
        struct fprobe *fp;
        void *entry_data = NULL;
-       int bit;
+       int bit, ret;
 
        fp = container_of(ops, struct fprobe, ops);
        if (fprobe_disabled(fp))
@@ -52,11 +52,15 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
        }
 
        if (fp->entry_handler)
-               fp->entry_handler(fp, ip, ftrace_get_regs(fregs), entry_data);
-
-       if (rh)
-               rethook_hook(rh, ftrace_get_regs(fregs), true);
+               ret = fp->entry_handler(fp, ip, ftrace_get_regs(fregs), entry_data);
+       /* If entry_handler returns !0, nmissed is not counted. */
+       if (rh) {
+               if (ret)
+                       rethook_recycle(rh);
+               else
+                       rethook_hook(rh, ftrace_get_regs(fregs), true);
+       }
 out:
        ftrace_test_recursion_unlock(bit);
 }
diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index 4b37d7022f35..9fa2ac9eda83 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -37,7 +37,7 @@ static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32))
        return nest(value + 2);
 }
 
-static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip,
+static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip,
                                    struct pt_regs *regs, void *data)
 {
        KUNIT_EXPECT_FALSE(current_test, preemptible());
@@ -51,6 +51,8 @@ static notrace void fp_entry_handler(struct fprobe *fp, unsigned long ip,
                        *(u32 *)data = entry_val;
        } else
                KUNIT_EXPECT_NULL(current_test, data);
+
+       return 0;
 }
 
 static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
@@ -74,10 +76,11 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
                KUNIT_EXPECT_NULL(current_test, data);
 }
 
-static notrace void nest_entry_handler(struct fprobe *fp, unsigned long ip,
+static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip,
                                      struct pt_regs *regs, void *data)
 {
        KUNIT_EXPECT_FALSE(current_test, preemptible());
+       return 0;
 }
 
 static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip,
diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c
index dd794990ad7e..4efc8feb6277 100644
--- a/samples/fprobe/fprobe_example.c
+++ b/samples/fprobe/fprobe_example.c
@@ -48,8 +48,8 @@ static void show_backtrace(void)
        stack_trace_print(stacks, len, 24);
 }
 
-static void sample_entry_handler(struct fprobe *fp, unsigned long ip,
-                                struct pt_regs *regs, void *data)
+static int sample_entry_handler(struct fprobe *fp, unsigned long ip,
+                               struct pt_regs *regs, void *data)
 {
        if (use_trace)
                /*
@@ -62,6 +62,7 @@ static void sample_entry_handler(struct fprobe *fp, unsigned long ip,
        nhit++;
        if (stackdump)
                show_backtrace();
+       return 0;
 }
 
 static void sample_exit_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs,
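With this change an entry handler can act as a per-call filter. A minimal
hypothetical sketch (not part of the patch; the "only_pid" filter and the handler
names are assumptions) that cancels the exit handler for calls it does not care
about:

#include <linux/fprobe.h>
#include <linux/ktime.h>
#include <linux/sched.h>

static pid_t only_pid;  /* 0 means "trace everything" */

static int filter_entry(struct fprobe *fp, unsigned long entry_ip,
                        struct pt_regs *regs, void *entry_data)
{
        if (only_pid && current->pid != only_pid)
                return 1;       /* skip: rethook node is recycled, exit handler never runs */

        *(u64 *)entry_data = ktime_get_ns();
        return 0;               /* hook the return and call filter_exit() */
}

static void filter_exit(struct fprobe *fp, unsigned long entry_ip,
                        struct pt_regs *regs, void *entry_data)
{
        pr_info("pid %d: %llu ns\n", current->pid,
                ktime_get_ns() - *(u64 *)entry_data);
}

static struct fprobe filter_fp = {
        .entry_handler   = filter_entry,
        .exit_handler    = filter_exit,
        .entry_data_size = sizeof(u64),
};

Registration works exactly as in the earlier sketches (register_fprobe() /
unregister_fprobe()); skipped calls are not counted in nmissed, as the comment in
the patch notes.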
From patchwork Wed Mar 29 19:45:22 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76807
Message-ID: <20230329194550.208773263@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:22 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 06/25] lib/test_fprobe: Add a testcase for skipping exit_handler
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Add a testcase for skipping the exit_handler if the entry_handler returns !0.
Link: https://lkml.kernel.org/r/167526700658.433354.12922388040490848613.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 lib/test_fprobe.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index 9fa2ac9eda83..0fe5273e960b 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -21,6 +21,7 @@ static u32 (*target_nest)(u32 value, u32 (*nest)(u32));
 static unsigned long target_ip;
 static unsigned long target2_ip;
 static unsigned long target_nest_ip;
+static int entry_return_value;
 
 static noinline u32 fprobe_selftest_target(u32 value)
 {
@@ -52,7 +53,7 @@ static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip,
        } else
                KUNIT_EXPECT_NULL(current_test, data);
 
-       return 0;
+       return entry_return_value;
 }
 
 static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip,
@@ -205,6 +206,28 @@ static void test_fprobe_nest(struct kunit *test)
        KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
 }
 
+static void test_fprobe_skip(struct kunit *test)
+{
+       struct fprobe fp = {
+               .entry_handler = fp_entry_handler,
+               .exit_handler = fp_exit_handler,
+       };
+
+       current_test = test;
+       KUNIT_EXPECT_EQ(test, 0, register_fprobe(&fp, "fprobe_selftest_target", NULL));
+
+       entry_return_value = 1;
+       entry_val = 0;
+       exit_val = 0;
+       target(rand1);
+       KUNIT_EXPECT_NE(test, 0, entry_val);
+       KUNIT_EXPECT_EQ(test, 0, exit_val);
+       KUNIT_EXPECT_EQ(test, 0, fp.nmissed);
+       entry_return_value = 0;
+
+       KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp));
+}
+
 static unsigned long get_ftrace_location(void *func)
 {
        unsigned long size, addr = (unsigned long)func;
@@ -234,6 +257,7 @@ static struct kunit_case fprobe_testcases[] = {
        KUNIT_CASE(test_fprobe_syms),
        KUNIT_CASE(test_fprobe_data),
        KUNIT_CASE(test_fprobe_nest),
+       KUNIT_CASE(test_fprobe_skip),
        {}
 };
From patchwork Wed Mar 29 19:45:23 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76801
Message-ID: <20230329194550.418276586@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:23 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Florent Revest, Will Deacon
Subject: [for-next][PATCH 07/25] docs: tracing: Update fprobe documentation
References: <20230329194516.146147554@goodmis.org>

From: "Masami Hiramatsu (Google)"

Update fprobe.rst for
 - the private entry_data argument
 - the return value of the entry handler
 - the nr_rethook_node field.
Link: https://lkml.kernel.org/r/167526701579.433354.3057889264263546659.stgit@mhiramat.roam.corp.google.com

Cc: Florent Revest
Cc: Mark Rutland
Cc: Will Deacon
Signed-off-by: Masami Hiramatsu (Google)
Signed-off-by: Steven Rostedt (Google)
---
 Documentation/trace/fprobe.rst | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/Documentation/trace/fprobe.rst b/Documentation/trace/fprobe.rst
index b64bec1ce144..40dd2fbce861 100644
--- a/Documentation/trace/fprobe.rst
+++ b/Documentation/trace/fprobe.rst
@@ -87,14 +87,16 @@ returns as same as unregister_ftrace_function().
 The fprobe entry/exit handler
 =============================
 
-The prototype of the entry/exit callback function is as follows:
+The prototype of the entry/exit callback function are as follows:
 
 .. code-block:: c
 
- void callback_func(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs);
+ int entry_callback(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs, void *entry_data);
 
-Note that both entry and exit callbacks have same ptototype. The @entry_ip is
-saved at function entry and passed to exit handler.
+ void exit_callback(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs, void *entry_data);
+
+Note that the @entry_ip is saved at function entry and passed to exit handler.
+If the entry callback function returns !0, the corresponding exit callback will be cancelled.
 
 @fp
         This is the address of `fprobe` data structure related to this handler.
@@ -113,6 +115,12 @@ saved at function entry and passed to exit handler.
         to use @entry_ip. On the other hand, in the exit_handler, the instruction
         pointer of @regs is set to the currect return address.
 
+@entry_data
+        This is a local storage to share the data between entry and exit handlers.
+        This storage is NULL by default. If the user specify `exit_handler` field
+        and `entry_data_size` field when registering the fprobe, the storage is
+        allocated and passed to both `entry_handler` and `exit_handler`.
+
 Share the callbacks with kprobes
 ================================
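The documentation above points out that, in the exit handler, @entry_ip still
identifies the probed function while the instruction pointer of @regs has become
the return address. A tiny hypothetical sketch (not part of the patch; the handler
name is illustrative) showing both values side by side:

#include <linux/fprobe.h>
#include <linux/ptrace.h>
#include <linux/printk.h>

static void where_exit(struct fprobe *fp, unsigned long entry_ip,
                       struct pt_regs *regs, void *entry_data)
{
        /* entry_ip = probed function; instruction_pointer(regs) = its caller's return site */
        pr_info("%pS returned to %pS\n",
                (void *)entry_ip, (void *)instruction_pointer(regs));
}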
From patchwork Wed Mar 29 19:45:24 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76783
Message-ID: <20230329194550.625315959@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:24 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, "Tobin C. Harding", Paolo Bonzini, Shuah Khan, Tycho Andersen, Mukesh Ojha, Ross Zwisler
Subject: [for-next][PATCH 08/25] selftests: use canonical ftrace path
References: <20230329194516.146147554@goodmis.org>

From: Ross Zwisler

The canonical location for the tracefs filesystem is at /sys/kernel/tracing.
But, from Documentation/trace/ftrace.rst:

  Before 4.1, all ftrace tracing control files were within the debugfs
  file system, which is typically located at /sys/kernel/debug/tracing.
  For backward compatibility, when mounting the debugfs file system,
  the tracefs file system will be automatically mounted at:

    /sys/kernel/debug/tracing

A few spots in tools/testing/selftests still refer to this older debugfs
path, so let's update them to avoid confusion.

Link: https://lkml.kernel.org/r/20230313211746.1541525-1-zwisler@kernel.org

Cc: "Tobin C. Harding"
Harding" Cc: Andrew Morton Cc: Mark Rutland Cc: Masami Hiramatsu Cc: Paolo Bonzini Cc: Shuah Khan Cc: Tycho Andersen Reviewed-by: Steven Rostedt (Google) Reviewed-by: Mukesh Ojha Signed-off-by: Ross Zwisler Signed-off-by: Steven Rostedt (Google) --- tools/testing/selftests/mm/protection_keys.c | 4 ++-- tools/testing/selftests/user_events/dyn_test.c | 2 +- tools/testing/selftests/user_events/ftrace_test.c | 10 +++++----- tools/testing/selftests/user_events/perf_test.c | 8 ++++---- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/mm/protection_keys.c b/tools/testing/selftests/mm/protection_keys.c index 95f403a0c46d..0381c34fdd56 100644 --- a/tools/testing/selftests/mm/protection_keys.c +++ b/tools/testing/selftests/mm/protection_keys.c @@ -98,7 +98,7 @@ int tracing_root_ok(void) void tracing_on(void) { #if CONTROL_TRACING > 0 -#define TRACEDIR "/sys/kernel/debug/tracing" +#define TRACEDIR "/sys/kernel/tracing" char pidstr[32]; if (!tracing_root_ok()) @@ -124,7 +124,7 @@ void tracing_off(void) #if CONTROL_TRACING > 0 if (!tracing_root_ok()) return; - cat_into_file("0", "/sys/kernel/debug/tracing/tracing_on"); + cat_into_file("0", "/sys/kernel/tracing/tracing_on"); #endif } diff --git a/tools/testing/selftests/user_events/dyn_test.c b/tools/testing/selftests/user_events/dyn_test.c index d6265d14cd51..8879a7b04c6a 100644 --- a/tools/testing/selftests/user_events/dyn_test.c +++ b/tools/testing/selftests/user_events/dyn_test.c @@ -16,7 +16,7 @@ #include "../kselftest_harness.h" -const char *dyn_file = "/sys/kernel/debug/tracing/dynamic_events"; +const char *dyn_file = "/sys/kernel/tracing/dynamic_events"; const char *clear = "!u:__test_event"; static int Append(const char *value) diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c index 404a2713dcae..a0b2c96eb252 100644 --- a/tools/testing/selftests/user_events/ftrace_test.c +++ b/tools/testing/selftests/user_events/ftrace_test.c @@ -16,11 +16,11 @@ #include "../kselftest_harness.h" -const char *data_file = "/sys/kernel/debug/tracing/user_events_data"; -const char *status_file = "/sys/kernel/debug/tracing/user_events_status"; -const char *enable_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/enable"; -const char *trace_file = "/sys/kernel/debug/tracing/trace"; -const char *fmt_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/format"; +const char *data_file = "/sys/kernel/tracing/user_events_data"; +const char *status_file = "/sys/kernel/tracing/user_events_status"; +const char *enable_file = "/sys/kernel/tracing/events/user_events/__test_event/enable"; +const char *trace_file = "/sys/kernel/tracing/trace"; +const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; static inline int status_check(char *status_page, int status_bit) { diff --git a/tools/testing/selftests/user_events/perf_test.c b/tools/testing/selftests/user_events/perf_test.c index 8b4c7879d5a7..31505642aa9b 100644 --- a/tools/testing/selftests/user_events/perf_test.c +++ b/tools/testing/selftests/user_events/perf_test.c @@ -18,10 +18,10 @@ #include "../kselftest_harness.h" -const char *data_file = "/sys/kernel/debug/tracing/user_events_data"; -const char *status_file = "/sys/kernel/debug/tracing/user_events_status"; -const char *id_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/id"; -const char *fmt_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/format"; +const char *data_file = 
"/sys/kernel/tracing/user_events_data"; +const char *status_file = "/sys/kernel/tracing/user_events_status"; +const char *id_file = "/sys/kernel/tracing/events/user_events/__test_event/id"; +const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; struct event { __u32 index; From patchwork Wed Mar 29 19:45:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76789 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp652367vqo; Wed, 29 Mar 2023 12:58:45 -0700 (PDT) X-Google-Smtp-Source: AKy350agf2Yosa7C68mSDwnF6Uvk+6QN9tg9LXOk33MQivA4XtjN2pGq8DTdxtCkXx3gROPyf4mA X-Received: by 2002:a17:902:e744:b0:19d:1fce:c9ec with SMTP id p4-20020a170902e74400b0019d1fcec9ecmr24754016plf.37.1680119925317; Wed, 29 Mar 2023 12:58:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119925; cv=none; d=google.com; s=arc-20160816; b=wahglam9pRvWGalX0rwfbWLCyA+HcsMYkSh9wijoLthj1fzJXJJ269OpL8c060C/Kv T6eWHlOp10g7O38h91pbayEEfHci1AaNNcSC28UT62CUEEBgw83c1lK7BWTEdVMz6h2C hG8HNh4pTbCS6Gz7n+VLfoItLqiKk4vI1mGxAYxL/+gYUzU3qOFy5qGNIZWUSV7WVOuM AUNGgpBQJQzBe3l52QS2Y8Rgxex/qDOFHBsml1XZuspMxOZNfH9gvebx/ndf8GLgEDNH Blu+m4QKH1wuNgxUp+1jTJShD+wai4gWaZ6nxuSsvHvAqY+4KjFE19H1wkugdCIpJq2D lEiw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=pM8mEIly2r1SKgU+57434SegX/MgmcNxrd8c4LBZSlE=; b=Jym6QkyfZaHPMZ3fld6lu3/+QohzXVc1vPV5F9cuTGgiyAvImvCVm4BnI5o6m+H7cq b2a04w6ZbQRJ3MWM9wAbTcsxa+EnztfYQ/Fn8Rm3gHUBesIUixb/flod6T4S7RISu5e/ NG40hV1jIoikbWoP9ysvzB620JDo5mg4AU1L+Neul6GugHiOppvoetC5JD7+5S592/SB d5lvnpTwoB3BT0KtyXYB1+yePSCvogy+XdHeHHcKXWk6PddMfMTSWDZr25v/+QwhAbJ8 H8MB7OmXdSRYcF2+RMQLEpXxeMfZY/32cmUHavnCu1ouiGpprFTDEZxl862FLJaR+/SY oM/g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h10-20020a170902f7ca00b0019bf9b4b5f1si31546830plw.629.2023.03.29.12.58.32; Wed, 29 Mar 2023 12:58:45 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230164AbjC2TqC (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229922AbjC2Tpx (ORCPT ); Wed, 29 Mar 2023 15:45:53 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4057CA for ; Wed, 29 Mar 2023 12:45:52 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 406BC61E1D for ; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A377C433EF; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbjz-002RjB-03; Wed, 29 Mar 2023 15:45:51 -0400 Message-ID: <20230329194550.833880823@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:25 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , "Tobin C. Harding" , Paolo Bonzini , Shuah Khan , Tycho Andersen , Ross Zwisler Subject: [for-next][PATCH 09/25] leaking_addresses: also skip canonical ftrace path References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733431016259369?= X-GMAIL-MSGID: =?utf-8?q?1761733431016259369?= From: Ross Zwisler The canonical location for the tracefs filesystem is at /sys/kernel/tracing. But, from Documentation/trace/ftrace.rst: Before 4.1, all ftrace tracing control files were within the debugfs file system, which is typically located at /sys/kernel/debug/tracing. For backward compatibility, when mounting the debugfs file system, the tracefs file system will be automatically mounted at: /sys/kernel/debug/tracing scripts/leaking_addresses.pl only skipped this older debugfs path, so let's add the canonical path as well. Link: https://lkml.kernel.org/r/20230313211746.1541525-2-zwisler@kernel.org Cc: "Tobin C. 
Harding" Cc: Andrew Morton Cc: Mark Rutland Cc: Masami Hiramatsu Cc: Paolo Bonzini Cc: Shuah Khan Acked-by: Tycho Andersen Reviewed-by: Steven Rostedt (Google) Signed-off-by: Ross Zwisler Signed-off-by: Steven Rostedt (Google) --- scripts/leaking_addresses.pl | 1 + 1 file changed, 1 insertion(+) diff --git a/scripts/leaking_addresses.pl b/scripts/leaking_addresses.pl index 8f636a23bc3f..e695634d153d 100755 --- a/scripts/leaking_addresses.pl +++ b/scripts/leaking_addresses.pl @@ -61,6 +61,7 @@ my @skip_abs = ( '/proc/device-tree', '/proc/1/syscall', '/sys/firmware/devicetree', + '/sys/kernel/tracing/trace_pipe', '/sys/kernel/debug/tracing/trace_pipe', '/sys/kernel/security/apparmor/revision'); From patchwork Wed Mar 29 19:45:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76791 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp652986vqo; Wed, 29 Mar 2023 13:00:02 -0700 (PDT) X-Google-Smtp-Source: AKy350Y143Nz4yWT6i3R2X3IYfLPbM2Te1LfOMDpuxY3zgPXN5yZS7jmj58g+fmj+DrJosXE4HWd X-Received: by 2002:a17:903:4d:b0:19c:f8c5:d504 with SMTP id l13-20020a170903004d00b0019cf8c5d504mr16511039pla.59.1680120002228; Wed, 29 Mar 2023 13:00:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120002; cv=none; d=google.com; s=arc-20160816; b=dUlaIwrpuKuYhOBNYTvPxwcNszbiHzszLF0SuD0Cw+5Q6LwYg8EuxaOOS3KHSQtlel KoPwX5jyj3AzbyCJjjsBbJ0LaHAJ238INT6rPwpL7dKTc4nVjGCYg+2uCAWXdVmVzyrn zjd+xHgHIOi8xEQAoA12IpRQz1lEVFuzi6cPPLqMbCdnjOA5tuixfeeEuJJNKkTLLgnz xuT+er3gFhGKYBnzO/zf+GZBaHb4ueb/3w4QMDqdADLoJmfmpLy2Z+gv/8zjsLksNstQ 7UTp5vu4nkF1H/9kNycR5CbEVBX/FVWnQfDJwY12tzcdqdO/Ax9J2JdkBkTF3zWH15mJ aNmg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=lxI4GOwBwY8K7V2N/n4Bhk9o1L0zzd+HFrlLdlRAllE=; b=pb4eyRK/zh+NaPHaZpfgDw/MOzOD+9VULl/NryVhqcP6SJBfDu6jxX9bxwxu+wLeVz ifLfg3b1zbqExNQncMvr+1Hr9LOz4jCOPWVOSFb9zGDWQYnm1jDme+VgtVa2f1kLI891 Sl20C81m+C829hX7Y3WqIvERHAWngj8vuupE3WedXol12z7rg9ZFIM7U0PtVh4D20ts6 Tt6rzhvstHiR7eh8asLQc/vV4ZRiy5vUibYB12FW5CGOmvEYvAZpVXck06snyaPxi475 9JNL7IzNoDlIqFZuERwgBAhVPARm7CxoYJLjm/fmT7ioORP5D6odsDRKrRukPqLwO+tb /ikA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id f20-20020a63f114000000b00502d85bfb5fsi17276890pgi.451.2023.03.29.12.59.48; Wed, 29 Mar 2023 13:00:02 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230192AbjC2TqM (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229705AbjC2Tpx (ORCPT ); Wed, 29 Mar 2023 15:45:53 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D93F9139 for ; Wed, 29 Mar 2023 12:45:52 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 48D2361E27 for ; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1D789C433A1; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbjz-002Rjk-0i; Wed, 29 Mar 2023 15:45:51 -0400 Message-ID: <20230329194551.039752631@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:26 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , "Tobin C. Harding" , Shuah Khan , Tycho Andersen , Paolo Bonzini , Mukesh Ojha , Ross Zwisler Subject: [for-next][PATCH 10/25] tools/kvm_stat: use canonical ftrace path References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733511283960123?= X-GMAIL-MSGID: =?utf-8?q?1761733511283960123?= From: Ross Zwisler The canonical location for the tracefs filesystem is at /sys/kernel/tracing. But, from Documentation/trace/ftrace.rst: Before 4.1, all ftrace tracing control files were within the debugfs file system, which is typically located at /sys/kernel/debug/tracing. For backward compatibility, when mounting the debugfs file system, the tracefs file system will be automatically mounted at: /sys/kernel/debug/tracing A comment in kvm_stat still refers to this older debugfs path, so let's update it to avoid confusion. Link: https://lkml.kernel.org/r/20230313211746.1541525-3-zwisler@kernel.org Cc: "Tobin C. 
Harding" Cc: Andrew Morton Cc: Mark Rutland Cc: Masami Hiramatsu Cc: Shuah Khan Cc: Tycho Andersen Acked-by: Paolo Bonzini Reviewed-by: Steven Rostedt (Google) Reviewed-by: Mukesh Ojha Signed-off-by: Ross Zwisler Signed-off-by: Steven Rostedt (Google) --- tools/kvm/kvm_stat/kvm_stat | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat index 6f28180ffeea..15bf00e79e3f 100755 --- a/tools/kvm/kvm_stat/kvm_stat +++ b/tools/kvm/kvm_stat/kvm_stat @@ -627,7 +627,7 @@ class TracepointProvider(Provider): name)'. All available events have directories under - /sys/kernel/debug/tracing/events/ which export information + /sys/kernel/tracing/events/ which export information about the specific event. Therefore, listing the dirs gives us a list of all available events. From patchwork Wed Mar 29 19:45:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76781 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp647291vqo; Wed, 29 Mar 2023 12:47:55 -0700 (PDT) X-Google-Smtp-Source: AKy350YNUnRRVxJCtYLGxcp6l8OO16VpjRwizjnpL1xZcNjzTrXb8CE+Kvjr3gySX7ID7ZfZJQU1 X-Received: by 2002:a17:906:5010:b0:92a:77dd:f6f with SMTP id s16-20020a170906501000b0092a77dd0f6fmr23156630ejj.73.1680119275175; Wed, 29 Mar 2023 12:47:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119275; cv=none; d=google.com; s=arc-20160816; b=kM3qO1r81WtzfM3Ly92vcSryarZjCSEhIqTgwpEttK/fhHL6y3t/yMqjNi4rCgq6zR NS+crP/0kndDvqaWzAeCdOJcEb9OpTMInAD1guSE2r0iEmrfwh+k1aaS7jac3Ug4mV1s JixoXa2JvKn4VewajhY/QGnaxjfXVvpYhLrlKLWRr/svpbz6xUBpn7ULOQ7Nijc5Wc69 UL1KHXh9giDmHFryeDFitiNMiZJH5ssvE7oGVc2LvN9UN85FfKpx3nDPhEzLqcO8A4Hz yIrDo3Wz9RSpHG3rJTJ4cvKJ8jGdsdnO8WXyUiGzZBbV38tjDqHC7B9aTG+JVm3cYwkJ cg/A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=6uDwRhzpipOTTntpyFVWBYZRyXRx8D5xWwzVJj3ZZjY=; b=KIvrLvVGC7ZACgdT/Udian8r62wFBWeVeFTmtiRg9eZtpuT26vcwdoNqNL/s2+TRNC gRJ1u9hTdescVrDgo1c+Rc32spiyIBOivJ1YCA2WYH2kaa+kLlziIvh1j2rPnipT320A YRjfggC7blaK/FTvIwa2x/3Ur1mR4eZY2fRYlf7qZHqXCoyv2XioySfwzw6VRdOOvzjO 0s6xZHYtYqBmlIYeMKJh81FfCjFAPkdaNsDePST49PlQ2KtqwYqxPf9fu5o5wh47pQxM TT1wC2Yb8L3qAmEG1Hp5oxPONRPoBvOdEPrqWXI0lSXjJxwMPqwzb0NlGSsOOvcCjt4q CSHg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id j12-20020aa7c40c000000b004bcedde1496si33517007edq.287.2023.03.29.12.47.29; Wed, 29 Mar 2023 12:47:55 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230415AbjC2Tqh (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230052AbjC2Tp4 (ORCPT ); Wed, 29 Mar 2023 15:45:56 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B83DE1711 for ; Wed, 29 Mar 2023 12:45:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B414861E2D for ; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7ED4AC433D2; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbjz-002RkJ-1O; Wed, 29 Mar 2023 15:45:51 -0400 Message-ID: <20230329194551.245078704@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:27 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 11/25] tracing: Add "fields" option to show raw trace event fields References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761732749015440717?= X-GMAIL-MSGID: =?utf-8?q?1761732749015440717?= From: "Steven Rostedt (Google)" The hex, raw and bin formats come from the old PREEMPT_RT patch set latency tracer. That actually gave real alternatives to reading the ascii buffer. But they have started to bit rot and they do not give a good representation of the tracing data. Add "fields" option that will read the trace event fields and parse the data from how the fields are defined: With "fields" = 0 (default) echo 1 > events/sched/sched_switch/enable cat trace -0 [003] d..2. 540.078653: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/3:1 next_pid=83 next_prio=120 kworker/3:1-83 [003] d..2. 540.078860: sched_switch: prev_comm=kworker/3:1 prev_pid=83 prev_prio=120 prev_state=I ==> next_comm=swapper/3 next_pid=0 next_prio=120 -0 [003] d..2. 540.206423: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=sshd next_pid=807 next_prio=120 sshd-807 [003] d..2. 
540.206531: sched_switch: prev_comm=sshd prev_pid=807 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120 -0 [001] d..2. 540.206597: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/u16:4 next_pid=58 next_prio=120 kworker/u16:4-58 [001] d..2. 540.206617: sched_switch: prev_comm=kworker/u16:4 prev_pid=58 prev_prio=120 prev_state=I ==> next_comm=bash next_pid=830 next_prio=120 bash-830 [001] d..2. 540.206678: sched_switch: prev_comm=bash prev_pid=830 prev_prio=120 prev_state=R ==> next_comm=kworker/u16:4 next_pid=58 next_prio=120 kworker/u16:4-58 [001] d..2. 540.206696: sched_switch: prev_comm=kworker/u16:4 prev_pid=58 prev_prio=120 prev_state=I ==> next_comm=bash next_pid=830 next_prio=120 bash-830 [001] d..2. 540.206713: sched_switch: prev_comm=bash prev_pid=830 prev_prio=120 prev_state=R ==> next_comm=kworker/u16:4 next_pid=58 next_prio=120 echo 1 > options/fields <...>-998 [002] d..2. 538.643732: sched_switch: next_prio=0x78 (120) next_pid=0x0 (0) next_comm=swapper/2 prev_state=0x20 (32) prev_prio=0x78 (120) prev_pid=0x3e6 (998) prev_comm=trace-cmd -0 [001] d..2. 538.643806: sched_switch: next_prio=0x78 (120) next_pid=0x33e (830) next_comm=bash prev_state=0x0 (0) prev_prio=0x78 (120) prev_pid=0x0 (0) prev_comm=swapper/1 bash-830 [001] d..2. 538.644106: sched_switch: next_prio=0x78 (120) next_pid=0x3a (58) next_comm=kworker/u16:4 prev_state=0x0 (0) prev_prio=0x78 (120) prev_pid=0x33e (830) prev_comm=bash kworker/u16:4-58 [001] d..2. 538.644130: sched_switch: next_prio=0x78 (120) next_pid=0x33e (830) next_comm=bash prev_state=0x80 (128) prev_prio=0x78 (120) prev_pid=0x3a (58) prev_comm=kworker/u16:4 bash-830 [001] d..2. 538.644180: sched_switch: next_prio=0x78 (120) next_pid=0x3a (58) next_comm=kworker/u16:4 prev_state=0x0 (0) prev_prio=0x78 (120) prev_pid=0x33e (830) prev_comm=bash kworker/u16:4-58 [001] d..2. 538.644185: sched_switch: next_prio=0x78 (120) next_pid=0x33e (830) next_comm=bash prev_state=0x80 (128) prev_prio=0x78 (120) prev_pid=0x3a (58) prev_comm=kworker/u16:4 bash-830 [001] d..2. 538.644204: sched_switch: next_prio=0x78 (120) next_pid=0x0 (0) next_comm=swapper/1 prev_state=0x1 (1) prev_prio=0x78 (120) prev_pid=0x33e (830) prev_comm=bash -0 [003] d..2. 538.644211: sched_switch: next_prio=0x78 (120) next_pid=0x327 (807) next_comm=sshd prev_state=0x0 (0) prev_prio=0x78 (120) prev_pid=0x0 (0) prev_comm=swapper/3 sshd-807 [003] d..2. 538.644340: sched_switch: next_prio=0x78 (120) next_pid=0x0 (0) next_comm=swapper/3 prev_state=0x1 (1) prev_prio=0x78 (120) prev_pid=0x327 (807) prev_comm=sshd It traces the data safely without using the trace print formatting. 
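
The option is a normal trace option under tracefs, so it can also be driven
programmatically. The following is only an illustrative sketch (not part of
the patch): it assumes tracefs is mounted at the canonical
/sys/kernel/tracing, does one fixed-size read, and skips most error handling.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a short string to a tracefs control file. */
static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	char buf[8192];
	ssize_t n;
	int fd;

	/* Enable one event and switch the text output to per-field printing. */
	write_str("/sys/kernel/tracing/events/sched/sched_switch/enable", "1");
	write_str("/sys/kernel/tracing/options/fields", "1");

	/* Read back a chunk of the buffer; each event is dumped field by field. */
	fd = open("/sys/kernel/tracing/trace", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}
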
Link: https://lore.kernel.org/linux-trace-kernel/20230328145156.497651be@gandalf.local.home Cc: Masami Hiramatsu Cc: Mark Rutland Cc: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- Documentation/trace/ftrace.rst | 6 ++ kernel/trace/trace.c | 7 +- kernel/trace/trace.h | 2 + kernel/trace/trace_output.c | 168 +++++++++++++++++++++++++++++++++ kernel/trace/trace_output.h | 2 + 5 files changed, 183 insertions(+), 2 deletions(-) diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst index b927fb2b94dc..aaebb821912e 100644 --- a/Documentation/trace/ftrace.rst +++ b/Documentation/trace/ftrace.rst @@ -1027,6 +1027,7 @@ To see what is available, simply cat the file:: nohex nobin noblock + nofields trace_printk annotate nouserstacktrace @@ -1110,6 +1111,11 @@ Here are the available options: block When set, reading trace_pipe will not block when polled. + fields + Print the fields as described by their types. This is a better + option than using hex, bin or raw, as it gives a better parsing + of the content of the event. + trace_printk Can disable trace_printk() from writing into the buffer. diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 937e9676dfd4..076d893d2965 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3726,7 +3726,7 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, #define STATIC_FMT_BUF_SIZE 128 static char static_fmt_buf[STATIC_FMT_BUF_SIZE]; -static char *trace_iter_expand_format(struct trace_iterator *iter) +char *trace_iter_expand_format(struct trace_iterator *iter) { char *tmp; @@ -4446,8 +4446,11 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter) if (trace_seq_has_overflowed(s)) return TRACE_TYPE_PARTIAL_LINE; - if (event) + if (event) { + if (tr->trace_flags & TRACE_ITER_FIELDS) + return print_event_fields(iter, event); return event->funcs->trace(iter, sym_flags, event); + } trace_seq_printf(s, "Unknown type %d\n", entry->type); diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 616e1aa1c4da..79bdefe9261b 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -619,6 +619,7 @@ bool trace_is_tracepoint_string(const char *str); const char *trace_event_format(struct trace_iterator *iter, const char *fmt); void trace_check_vprintf(struct trace_iterator *iter, const char *fmt, va_list ap) __printf(2, 0); +char *trace_iter_expand_format(struct trace_iterator *iter); int trace_empty(struct trace_iterator *iter); @@ -1199,6 +1200,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf, C(HEX, "hex"), \ C(BIN, "bin"), \ C(BLOCK, "block"), \ + C(FIELDS, "fields"), \ C(PRINTK, "trace_printk"), \ C(ANNOTATE, "annotate"), \ C(USERSTACKTRACE, "userstacktrace"), \ diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c index bd475a00f96d..780c6971c944 100644 --- a/kernel/trace/trace_output.c +++ b/kernel/trace/trace_output.c @@ -808,6 +808,174 @@ EXPORT_SYMBOL_GPL(unregister_trace_event); * Standard events */ +static void print_array(struct trace_iterator *iter, void *pos, + struct ftrace_event_field *field) +{ + int offset; + int len; + int i; + + offset = *(int *)pos & 0xffff; + len = *(int *)pos >> 16; + + if (field) + offset += field->offset; + + if (offset + len >= iter->ent_size) { + trace_seq_puts(&iter->seq, ""); + return; + } + + for (i = 0; i < len; i++, pos++) { + if (i) + trace_seq_putc(&iter->seq, ','); + trace_seq_printf(&iter->seq, "%02x", *(unsigned char *)pos); + } +} + +static void print_fields(struct trace_iterator 
*iter, struct trace_event_call *call, + struct list_head *head) +{ + struct ftrace_event_field *field; + int offset; + int len; + int ret; + void *pos; + + list_for_each_entry(field, head, link) { + trace_seq_printf(&iter->seq, " %s=", field->name); + if (field->offset + field->size > iter->ent_size) { + trace_seq_puts(&iter->seq, ""); + continue; + } + pos = (void *)iter->ent + field->offset; + + switch (field->filter_type) { + case FILTER_COMM: + case FILTER_STATIC_STRING: + trace_seq_printf(&iter->seq, "%.*s", field->size, (char *)pos); + break; + case FILTER_RDYN_STRING: + case FILTER_DYN_STRING: + offset = *(int *)pos & 0xffff; + len = *(int *)pos >> 16; + + if (field->filter_type == FILTER_RDYN_STRING) + offset += field->offset; + + if (offset + len >= iter->ent_size) { + trace_seq_puts(&iter->seq, ""); + break; + } + pos = (void *)iter->ent + offset; + trace_seq_printf(&iter->seq, "%.*s", len, (char *)pos); + break; + case FILTER_PTR_STRING: + if (!iter->fmt_size) + trace_iter_expand_format(iter); + pos = *(void **)pos; + ret = strncpy_from_kernel_nofault(iter->fmt, pos, + iter->fmt_size); + if (ret < 0) + trace_seq_printf(&iter->seq, "(0x%px)", pos); + else + trace_seq_printf(&iter->seq, "(0x%px:%s)", + pos, iter->fmt); + break; + case FILTER_TRACE_FN: + pos = *(void **)pos; + trace_seq_printf(&iter->seq, "%pS", pos); + break; + case FILTER_CPU: + case FILTER_OTHER: + switch (field->size) { + case 1: + if (isprint(*(char *)pos)) { + trace_seq_printf(&iter->seq, "'%c'", + *(unsigned char *)pos); + } + trace_seq_printf(&iter->seq, "(%d)", + *(unsigned char *)pos); + break; + case 2: + trace_seq_printf(&iter->seq, "0x%x (%d)", + *(unsigned short *)pos, + *(unsigned short *)pos); + break; + case 4: + /* dynamic array info is 4 bytes */ + if (strstr(field->type, "__data_loc")) { + print_array(iter, pos, NULL); + break; + } + + if (strstr(field->type, "__rel_loc")) { + print_array(iter, pos, field); + break; + } + + trace_seq_printf(&iter->seq, "0x%x (%d)", + *(unsigned int *)pos, + *(unsigned int *)pos); + break; + case 8: + trace_seq_printf(&iter->seq, "0x%llx (%lld)", + *(unsigned long long *)pos, + *(unsigned long long *)pos); + break; + default: + trace_seq_puts(&iter->seq, ""); + break; + } + break; + default: + trace_seq_puts(&iter->seq, ""); + } + } + trace_seq_putc(&iter->seq, '\n'); +} + +enum print_line_t print_event_fields(struct trace_iterator *iter, + struct trace_event *event) +{ + struct trace_event_call *call; + struct list_head *head; + + /* ftrace defined events have separate call structures */ + if (event->type <= __TRACE_LAST_TYPE) { + bool found = false; + + down_read(&trace_event_sem); + list_for_each_entry(call, &ftrace_events, list) { + if (call->event.type == event->type) { + found = true; + break; + } + /* No need to search all events */ + if (call->event.type > __TRACE_LAST_TYPE) + break; + } + up_read(&trace_event_sem); + if (!found) { + trace_seq_printf(&iter->seq, "UNKNOWN TYPE %d\n", event->type); + goto out; + } + } else { + call = container_of(event, struct trace_event_call, event); + } + head = trace_get_fields(call); + + trace_seq_printf(&iter->seq, "%s:", trace_event_name(call)); + + if (head && !list_empty(head)) + print_fields(iter, call, head); + else + trace_seq_puts(&iter->seq, "No fields found\n"); + + out: + return trace_handle_return(&iter->seq); +} + enum print_line_t trace_nop_print(struct trace_iterator *iter, int flags, struct trace_event *event) { diff --git a/kernel/trace/trace_output.h b/kernel/trace/trace_output.h index 
4c954636caf0..dca40f1f1da4 100644
--- a/kernel/trace/trace_output.h
+++ b/kernel/trace/trace_output.h
@@ -19,6 +19,8 @@ seq_print_ip_sym(struct trace_seq *s, unsigned long ip,
 extern void trace_seq_print_sym(struct trace_seq *s, unsigned long address, bool offset);
 extern int trace_print_context(struct trace_iterator *iter);
 extern int trace_print_lat_context(struct trace_iterator *iter);
+extern enum print_line_t print_event_fields(struct trace_iterator *iter,
+					    struct trace_event *event);
 extern void trace_event_read_lock(void);
 extern void trace_event_read_unlock(void);

From patchwork Wed Mar 29 19:45:28 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 76795
Message-ID: <20230329194551.451527297@goodmis.org>
Date: Wed, 29 Mar 2023 15:45:28 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Andrew Morton, Beau Belgrave
Subject: [for-next][PATCH 12/25] tracing/user_events: Split header into uapi and kernel
References: <20230329194516.146147554@goodmis.org>

From: Beau Belgrave

The UAPI parts need to be split out from the kernel parts of user_events
now that other parts of the kernel will reference it. Do so by moving the
existing include/linux/user_events.h into include/uapi/linux/user_events.h.

Link: https://lkml.kernel.org/r/20230328235219.203-2-beaub@linux.microsoft.com
Signed-off-by: Beau Belgrave
Signed-off-by: Steven Rostedt (Google)
---
 include/linux/user_events.h | 52 ++++----------------------
 include/uapi/linux/user_events.h | 48 +++++++++++++++++++++++++++++
 kernel/trace/trace_events_user.c | 5 ---
 3 files changed, 54 insertions(+), 51 deletions(-)
 create mode 100644 include/uapi/linux/user_events.h

diff --git a/include/linux/user_events.h b/include/linux/user_events.h
index 592a3fbed98e..13689589d36e 100644
--- a/include/linux/user_events.h
+++ b/include/linux/user_events.h
@@ -1,54 +1,14 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2021, Microsoft Corporation.
+ * Copyright (c) 2022, Microsoft Corporation.
* * Authors: * Beau Belgrave */ -#ifndef _UAPI_LINUX_USER_EVENTS_H -#define _UAPI_LINUX_USER_EVENTS_H -#include -#include +#ifndef _LINUX_USER_EVENTS_H +#define _LINUX_USER_EVENTS_H -#ifdef __KERNEL__ -#include -#else -#include -#endif +#include -#define USER_EVENTS_SYSTEM "user_events" -#define USER_EVENTS_PREFIX "u:" - -/* Create dynamic location entry within a 32-bit value */ -#define DYN_LOC(offset, size) ((size) << 16 | (offset)) - -/* - * Describes an event registration and stores the results of the registration. - * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum - * must set the size and name_args before invocation. - */ -struct user_reg { - - /* Input: Size of the user_reg structure being used */ - __u32 size; - - /* Input: Pointer to string with event name, description and flags */ - __u64 name_args; - - /* Output: Bitwise index of the event within the status page */ - __u32 status_bit; - - /* Output: Index of the event to use when writing data */ - __u32 write_index; -} __attribute__((__packed__)); - -#define DIAG_IOC_MAGIC '*' - -/* Requests to register a user_event */ -#define DIAG_IOCSREG _IOWR(DIAG_IOC_MAGIC, 0, struct user_reg*) - -/* Requests to delete a user_event */ -#define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char*) - -#endif /* _UAPI_LINUX_USER_EVENTS_H */ +#endif /* _LINUX_USER_EVENTS_H */ diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h new file mode 100644 index 000000000000..03f92366068d --- /dev/null +++ b/include/uapi/linux/user_events.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Copyright (c) 2021-2022, Microsoft Corporation. + * + * Authors: + * Beau Belgrave + */ +#ifndef _UAPI_LINUX_USER_EVENTS_H +#define _UAPI_LINUX_USER_EVENTS_H + +#include +#include + +#define USER_EVENTS_SYSTEM "user_events" +#define USER_EVENTS_PREFIX "u:" + +/* Create dynamic location entry within a 32-bit value */ +#define DYN_LOC(offset, size) ((size) << 16 | (offset)) + +/* + * Describes an event registration and stores the results of the registration. + * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum + * must set the size and name_args before invocation. 
+ */ +struct user_reg { + + /* Input: Size of the user_reg structure being used */ + __u32 size; + + /* Input: Pointer to string with event name, description and flags */ + __u64 name_args; + + /* Output: Bitwise index of the event within the status page */ + __u32 status_bit; + + /* Output: Index of the event to use when writing data */ + __u32 write_index; +} __attribute__((__packed__)); + +#define DIAG_IOC_MAGIC '*' + +/* Request to register a user_event */ +#define DIAG_IOCSREG _IOWR(DIAG_IOC_MAGIC, 0, struct user_reg *) + +/* Request to delete a user_event */ +#define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char *) + +#endif /* _UAPI_LINUX_USER_EVENTS_H */ diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 908e8a13c675..070551480747 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -19,12 +19,7 @@ #include #include #include -/* Reminder to move to uapi when everything works */ -#ifdef CONFIG_COMPILE_TEST #include -#else -#include -#endif #include "trace.h" #include "trace_dynevent.h" From patchwork Wed Mar 29 19:45:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76809 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp663978vqo; Wed, 29 Mar 2023 13:18:26 -0700 (PDT) X-Google-Smtp-Source: AKy350YGjQlBAmkjKdIhjU4q/KAJFmvoDrKL0nfol7uRBAgqRCvXp+JBgEw6CR0BkGLEIsOLq8aK X-Received: by 2002:a17:906:e918:b0:839:74cf:7c4f with SMTP id ju24-20020a170906e91800b0083974cf7c4fmr16462651ejb.8.1680121106586; Wed, 29 Mar 2023 13:18:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680121106; cv=none; d=google.com; s=arc-20160816; b=EC94jCr4mnu1PJzifr5ukTwruYfi+K525Rj64FPip80hMut4YEtAIYgWRLCCRe5CGW qb3PDoN4c6Xs0kEkXyU/CLgL6nLSozbMLa6G45sGFPULTXv1mNeyjBt/ZBrk/oZ74BMx aKR0ypBaM+K/XAPdELXPdhsqeMBzAcuI2ZadxAwMYl0VVmw9NWrbEMI0r9a1KdQwNo9u VF9U8howQT/+A57+GzwaPrqeodDTHLqK4+NYBDLCGRy4PgT+TWtgIEsUA8nJW/Z0LR/O sRLdyTCd2Uj7l7+NTA/21F+LpQt6ZTa9/UPo7FiHCFLAN67XhHsUzJThae0h2L+JY9+D lNMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=j2v5Cx1xdvY7OWIuyDgF80JTRA6pWVw+tHpcRQ1qOiw=; b=PerZyXGvhYf/fouDqSfiRHJCB2HvrrBrx/u49mlD++mr5d2Ex19E99nXBamQa2CYV3 4xMpqcieMbu9c4Hkl1C2N7FlhNFYBUaZkPAbZ3qwjk+wJDTvrONr0jLp7dTG+RkzFTli m5ucskzEDPmYw5FZuXlMLiBdoM4gzF9oE4qellap1lO8nLBruwW/zdH12DMuMZ3+5hDo uLyfz8cAf2cz5MA2DPbkHojZ0c+WXW5+SpVAAsTpaWWJagF5hIXNhQTriiiFcOSD6dea BKsmBrsFWqXL0pzwPl7+VS832gbRbVJzUiDwTlkkQp/TTl8hR1W6zyRX1aDQ7L60ORtX kkiQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id gl7-20020a170906e0c700b0093cda757bd8si16954241ejb.132.2023.03.29.13.18.02; Wed, 29 Mar 2023 13:18:26 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230463AbjC2Tq1 (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56178 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230002AbjC2Tpz (ORCPT ); Wed, 29 Mar 2023 15:45:55 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B72129F for ; Wed, 29 Mar 2023 12:45:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E77F661E29 for ; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3508C433A1; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbjz-002RlR-2i; Wed, 29 Mar 2023 15:45:51 -0400 Message-ID: <20230329194551.655419033@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:29 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Mathieu Desnoyers , Beau Belgrave Subject: [for-next][PATCH 13/25] tracing/user_events: Track fork/exec/exit for mm lifetime References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761734669462925492?= X-GMAIL-MSGID: =?utf-8?q?1761734669462925492?= From: Beau Belgrave During tracefs discussions it was decided instead of requiring a mapping within a user-process to track the lifetime of memory descriptors we should hook the appropriate calls. Do this by adding the minimal stubs required for task fork, exec, and exit. Currently this is just a NOP. Future patches will implement these calls fully. 
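
For background only, not content of this patch: the stubs rely on the usual
static inline no-op pattern, which lets fork/exec/exit call the hooks
unconditionally while the calls compile away when the feature is not built
in. A minimal sketch of that pattern follows; the function name mirrors this
series, but the exact guard layout is an assumption for illustration (a
later patch in this series provides the real CONFIG_USER_EVENTS
implementations).

struct task_struct;	/* forward declaration is enough for the prototype */

#ifdef CONFIG_USER_EVENTS
/* Real implementation, provided once the feature is fully wired up. */
void user_events_exit(struct task_struct *t);
#else
/* No-op stub: kernel/exit.c can call this unconditionally and the call
 * disappears at compile time when user_events is disabled. */
static inline void user_events_exit(struct task_struct *t)
{
}
#endif /* CONFIG_USER_EVENTS */

The same reasoning applies to user_events_fork() and user_events_execve().
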
Link: https://lkml.kernel.org/r/20230328235219.203-3-beaub@linux.microsoft.com Suggested-by: Mathieu Desnoyers Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- fs/exec.c | 2 ++ include/linux/sched.h | 5 +++++ include/linux/user_events.h | 18 ++++++++++++++++++ kernel/exit.c | 2 ++ kernel/fork.c | 2 ++ 5 files changed, 29 insertions(+) diff --git a/fs/exec.c b/fs/exec.c index 7c44d0c65b1b..2b0042f8deec 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -65,6 +65,7 @@ #include #include #include +#include #include #include @@ -1859,6 +1860,7 @@ static int bprm_execve(struct linux_binprm *bprm, current->fs->in_exec = 0; current->in_execve = 0; rseq_execve(current); + user_events_execve(current); acct_update_integrals(current); task_numa_free(current, false); return retval; diff --git a/include/linux/sched.h b/include/linux/sched.h index 63d242164b1a..bf37846e90c2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -69,6 +69,7 @@ struct sighand_struct; struct signal_struct; struct task_delay_info; struct task_group; +struct user_event_mm; /* * Task state bitmask. NOTE! These bits are also @@ -1528,6 +1529,10 @@ struct task_struct { union rv_task_monitor rv[RV_PER_TASK_MONITORS]; #endif +#ifdef CONFIG_USER_EVENTS + struct user_event_mm *user_event_mm; +#endif + /* * New fields for task_struct should be added above here, so that * they are included in the randomized portion of task_struct. diff --git a/include/linux/user_events.h b/include/linux/user_events.h index 13689589d36e..3d747c45d2fa 100644 --- a/include/linux/user_events.h +++ b/include/linux/user_events.h @@ -11,4 +11,22 @@ #include +#ifdef CONFIG_USER_EVENTS +struct user_event_mm { +}; +#endif + +static inline void user_events_fork(struct task_struct *t, + unsigned long clone_flags) +{ +} + +static inline void user_events_execve(struct task_struct *t) +{ +} + +static inline void user_events_exit(struct task_struct *t) +{ +} + #endif /* _LINUX_USER_EVENTS_H */ diff --git a/kernel/exit.c b/kernel/exit.c index f2afdb0add7c..875d6a134df8 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -68,6 +68,7 @@ #include #include #include +#include #include #include @@ -818,6 +819,7 @@ void __noreturn do_exit(long code) coredump_task_exit(tsk); ptrace_event(PTRACE_EVENT_EXIT, code); + user_events_exit(tsk); validate_creds_for_do_exit(tsk); diff --git a/kernel/fork.c b/kernel/fork.c index d8cda4c6de6c..efb1f2257772 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -97,6 +97,7 @@ #include #include #include +#include #include #include @@ -2505,6 +2506,7 @@ static __latent_entropy struct task_struct *copy_process( trace_task_newtask(p, clone_flags); uprobe_copy_process(p, clone_flags); + user_events_fork(p, clone_flags); copy_oom_score_adj(clone_flags, p); From patchwork Wed Mar 29 19:45:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76788 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp652266vqo; Wed, 29 Mar 2023 12:58:31 -0700 (PDT) X-Google-Smtp-Source: AKy350aVJVBYPnZjFB0V4pNXHeYIFvszSmUhJ6trrNvtermsJijWCildvzIAHTopOqwQZNQPg3k0 X-Received: by 2002:a17:90b:4c10:b0:22c:816e:d67d with SMTP id na16-20020a17090b4c1000b0022c816ed67dmr22764185pjb.24.1680119911075; Wed, 29 Mar 2023 12:58:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119911; cv=none; d=google.com; s=arc-20160816; b=Rqhln6Ny6/uta+sdI/tgcRaqKIRmwVwjjPEizn7+3+vqlSpOFEk6hSuNkj/rWrKpRW 
4/yqeD3au6zdNFboqWUfd9hKqfhOFpsSVPHGWdSF4EPhikdrAnFidX6a/MHoGGdJQ0TZ ksJE+HRhOb3Ygrw5Qci6du9IooVE+Fra21Jfawiq2ERRVhg/6DAkQmd7pDBWOwiS7IWJ 8cNyvQ2ruzv+NqToT475KOH62wpLa2vIfImTt69tg472e0J47vqeftLjIRKHiwFT2bZq UjhVtO7M86fh7gOIC8Ch9j33iETe7AbVHcJjRvPwgFQiF8XK6wMhXrVSxUPmNuQBJQ27 p/Lg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=5iiJHH89RX4P8TCKq4X7QMywVPHDXJZuPJgeh65AhZY=; b=N0H2pBVQPgYg9TQSQ9WSf4Q6jXmfNWSZ1JFARhud2lhPnjhDc6dxaeQ8iE9Vp4MUMq TV1RqWOwVcsUFZ139ahFPixVm0NmvnYFsZ4fmAabV+p8jUHBIfS0SnY4dBFZVbugq1QF ubbfnsBN+irWmDWKXRdEIMb/PNXyXneVZ/u1K565V5bft/3Yue+77JZLGA0ATDg54p1j Wr7O6SQfFKuV4rwiuZPzP6/xaCQBpJIfH7sErPg/3UfZ0Zv5JEG87y4T7/rIJVIrPAj4 AwyJb9G1xYXXiOvZ/CM+bEdhXWjOfFLEiriqhlTVVXwKaL8Z1NVK1UqN9/T4Toy3hbHl gFnQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id ay24-20020a17090b031800b0023b54bedd5csi1982546pjb.137.2023.03.29.12.58.18; Wed, 29 Mar 2023 12:58:31 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231211AbjC2Tq6 (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230126AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 27BD619BA for ; Wed, 29 Mar 2023 12:45:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3615061E25 for ; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E0749C433D2; Wed, 29 Mar 2023 19:45:52 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk0-002Rm0-09; Wed, 29 Mar 2023 15:45:52 -0400 Message-ID: <20230329194551.862838962@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:30 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Mathieu Desnoyers , Beau Belgrave Subject: [for-next][PATCH 14/25] tracing/user_events: Use remote writes for event enablement References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: 
=?utf-8?q?1761733415978254407?= X-GMAIL-MSGID: =?utf-8?q?1761733415978254407?= From: Beau Belgrave As part of the discussions for user_events aligned with user space tracers, it was determined that user programs should register a aligned value to set or clear a bit when an event becomes enabled. Currently a shared page is being used that requires mmap(). Remove the shared page implementation and move to a user registered address implementation. In this new model during the event registration from user programs 3 new values are specified. The first is the address to update when the event is either enabled or disabled. The second is the bit to set/clear to reflect the event being enabled. The third is the size of the value at the specified address. This allows for a local 32/64-bit value in user programs to support both kernel and user tracers. As an example, setting bit 31 for kernel tracers when the event becomes enabled allows for user tracers to use the other bits for ref counts or other flags. The kernel side updates the bit atomically, user programs need to also update these values atomically. User provided addresses must be aligned on a natural boundary, this allows for single page checking and prevents odd behaviors such as a enable value straddling 2 pages instead of a single page. Currently page faults are only logged, future patches will handle these. Link: https://lkml.kernel.org/r/20230328235219.203-4-beaub@linux.microsoft.com Suggested-by: Mathieu Desnoyers Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- include/linux/user_events.h | 53 ++- include/uapi/linux/user_events.h | 15 +- kernel/trace/Kconfig | 5 +- kernel/trace/trace_events_user.c | 586 ++++++++++++++++++++++++------- 4 files changed, 517 insertions(+), 142 deletions(-) diff --git a/include/linux/user_events.h b/include/linux/user_events.h index 3d747c45d2fa..0120b3dd5b03 100644 --- a/include/linux/user_events.h +++ b/include/linux/user_events.h @@ -9,13 +9,63 @@ #ifndef _LINUX_USER_EVENTS_H #define _LINUX_USER_EVENTS_H +#include +#include +#include +#include #include #ifdef CONFIG_USER_EVENTS struct user_event_mm { + struct list_head link; + struct list_head enablers; + struct mm_struct *mm; + struct user_event_mm *next; + refcount_t refcnt; + refcount_t tasks; + struct rcu_work put_rwork; }; -#endif +extern void user_event_mm_dup(struct task_struct *t, + struct user_event_mm *old_mm); + +extern void user_event_mm_remove(struct task_struct *t); + +static inline void user_events_fork(struct task_struct *t, + unsigned long clone_flags) +{ + struct user_event_mm *old_mm; + + if (!t || !current->user_event_mm) + return; + + old_mm = current->user_event_mm; + + if (clone_flags & CLONE_VM) { + t->user_event_mm = old_mm; + refcount_inc(&old_mm->tasks); + return; + } + + user_event_mm_dup(t, old_mm); +} + +static inline void user_events_execve(struct task_struct *t) +{ + if (!t || !t->user_event_mm) + return; + + user_event_mm_remove(t); +} + +static inline void user_events_exit(struct task_struct *t) +{ + if (!t || !t->user_event_mm) + return; + + user_event_mm_remove(t); +} +#else static inline void user_events_fork(struct task_struct *t, unsigned long clone_flags) { @@ -28,5 +78,6 @@ static inline void user_events_execve(struct task_struct *t) static inline void user_events_exit(struct task_struct *t) { } +#endif /* CONFIG_USER_EVENTS */ #endif /* _LINUX_USER_EVENTS_H */ diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h index 03f92366068d..22521bc622db 100644 --- 
a/include/uapi/linux/user_events.h +++ b/include/uapi/linux/user_events.h @@ -27,12 +27,21 @@ struct user_reg { /* Input: Size of the user_reg structure being used */ __u32 size; + /* Input: Bit in enable address to use */ + __u8 enable_bit; + + /* Input: Enable size in bytes at address */ + __u8 enable_size; + + /* Input: Flags for future use, set to 0 */ + __u16 flags; + + /* Input: Address to update when enabled */ + __u64 enable_addr; + /* Input: Pointer to string with event name, description and flags */ __u64 name_args; - /* Output: Bitwise index of the event within the status page */ - __u32 status_bit; - /* Output: Index of the event to use when writing data */ __u32 write_index; } __attribute__((__packed__)); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 5b1e7fa41ca8..c7020e071bf9 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -798,9 +798,10 @@ config USER_EVENTS can be used like an existing kernel trace event. User trace events are generated by writing to a tracefs file. User processes can determine if their tracing events should be - generated by memory mapping a tracefs file and checking for - an associated byte being non-zero. + generated by registering a value and bit with the kernel + that reflects when it is enabled or not. + See Documentation/trace/user_events.rst. If in doubt, say N. config HIST_TRIGGERS diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 070551480747..553a82ee7aeb 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include "trace.h" #include "trace_dynevent.h" @@ -29,34 +30,11 @@ #define FIELD_DEPTH_NAME 1 #define FIELD_DEPTH_SIZE 2 -/* - * Limits how many trace_event calls user processes can create: - * Must be a power of two of PAGE_SIZE. - */ -#define MAX_PAGE_ORDER 0 -#define MAX_PAGES (1 << MAX_PAGE_ORDER) -#define MAX_BYTES (MAX_PAGES * PAGE_SIZE) -#define MAX_EVENTS (MAX_BYTES * 8) - /* Limit how long of an event name plus args within the subsystem. */ #define MAX_EVENT_DESC 512 #define EVENT_NAME(user_event) ((user_event)->tracepoint.name) #define MAX_FIELD_ARRAY_SIZE 1024 -/* - * The MAP_STATUS_* macros are used for taking a index and determining the - * appropriate byte and the bit in the byte to set/reset for an event. - * - * The lower 3 bits of the index decide which bit to set. - * The remaining upper bits of the index decide which byte to use for the bit. - * - * This is used when an event has a probe attached/removed to reflect live - * status of the event wanting tracing or not to user-programs via shared - * memory maps. - */ -#define MAP_STATUS_BYTE(index) ((index) >> 3) -#define MAP_STATUS_MASK(index) BIT((index) & 7) - /* * Internal bits (kernel side only) to keep track of connected probes: * These are used when status is requested in text form about an event. These @@ -70,20 +48,14 @@ #define EVENT_STATUS_OTHER BIT(7) /* - * Stores the pages, tables, and locks for a group of events. - * Each logical grouping of events has its own group, with a - * matching page for status checks within user programs. This - * allows for isolation of events to user programs by various - * means. + * Stores the system name, tables, and locks for a group of events. This + * allows isolation for events by various means. 
*/ struct user_event_group { - struct page *pages; - char *register_page_data; char *system_name; struct hlist_node node; struct mutex reg_mutex; DECLARE_HASHTABLE(register_table, 8); - DECLARE_BITMAP(page_bitmap, MAX_EVENTS); }; /* Group for init_user_ns mapping, top-most group */ @@ -106,12 +78,34 @@ struct user_event { struct list_head fields; struct list_head validators; refcount_t refcnt; - int index; - int flags; int min_size; char status; }; +/* + * Stores per-mm/event properties that enable an address to be + * updated properly for each task. As tasks are forked, we use + * these to track enablement sites that are tied to an event. + */ +struct user_event_enabler { + struct list_head link; + struct user_event *event; + unsigned long addr; + + /* Track enable bit, flags, etc. Aligned for bitops. */ + unsigned int values; +}; + +/* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ +#define ENABLE_VAL_BIT_MASK 0x3F + +/* Only duplicate the bit value */ +#define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK + +/* Global list of memory descriptors using user_events */ +static LIST_HEAD(user_event_mms); +static DEFINE_SPINLOCK(user_event_mms_lock); + /* * Stores per-file events references, as users register events * within a file this structure is modified and freed via RCU. @@ -145,33 +139,17 @@ static int user_event_parse(struct user_event_group *group, char *name, char *args, char *flags, struct user_event **newuser); +static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm); +static struct user_event_mm *user_event_mm_get_all(struct user_event *user); +static void user_event_mm_put(struct user_event_mm *mm); + static u32 user_event_key(char *name) { return jhash(name, strlen(name), 0); } -static void set_page_reservations(char *pages, bool set) -{ - int page; - - for (page = 0; page < MAX_PAGES; ++page) { - void *addr = pages + (PAGE_SIZE * page); - - if (set) - SetPageReserved(virt_to_page(addr)); - else - ClearPageReserved(virt_to_page(addr)); - } -} - static void user_event_group_destroy(struct user_event_group *group) { - if (group->register_page_data) - set_page_reservations(group->register_page_data, false); - - if (group->pages) - __free_pages(group->pages, MAX_PAGE_ORDER); - kfree(group->system_name); kfree(group); } @@ -242,19 +220,6 @@ static struct user_event_group if (!group->system_name) goto error; - group->pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, MAX_PAGE_ORDER); - - if (!group->pages) - goto error; - - group->register_page_data = page_address(group->pages); - - set_page_reservations(group->register_page_data, true); - - /* Zero all bits beside 0 (which is reserved for failures) */ - bitmap_zero(group->page_bitmap, MAX_EVENTS); - set_bit(0, group->page_bitmap); - mutex_init(&group->reg_mutex); hash_init(group->register_table); @@ -266,20 +231,367 @@ static struct user_event_group return NULL; }; -static __always_inline -void user_event_register_set(struct user_event *user) +static void user_event_enabler_destroy(struct user_event_enabler *enabler) +{ + list_del_rcu(&enabler->link); + + /* No longer tracking the event via the enabler */ + refcount_dec(&enabler->event->refcnt); + + kfree(enabler); +} + +static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr) +{ + bool unlocked; + int ret; + + mmap_read_lock(mm->mm); + + /* Ensure MM has tasks, cannot use after exit_mm() */ + if (refcount_read(&mm->tasks) == 0) { + ret = -ENOENT; + goto out; + } + + ret = fixup_user_fault(mm->mm, uaddr, FAULT_FLAG_WRITE | 
FAULT_FLAG_REMOTE, + &unlocked); +out: + mmap_read_unlock(mm->mm); + + return ret; +} + +static int user_event_enabler_write(struct user_event_mm *mm, + struct user_event_enabler *enabler) +{ + unsigned long uaddr = enabler->addr; + unsigned long *ptr; + struct page *page; + void *kaddr; + int ret; + + lockdep_assert_held(&event_mutex); + mmap_assert_locked(mm->mm); + + /* Ensure MM has tasks, cannot use after exit_mm() */ + if (refcount_read(&mm->tasks) == 0) + return -ENOENT; + + ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, + &page, NULL, NULL); + + if (ret <= 0) { + pr_warn("user_events: Enable write failed\n"); + return -EFAULT; + } + + kaddr = kmap_local_page(page); + ptr = kaddr + (uaddr & ~PAGE_MASK); + + /* Update bit atomically, user tracers must be atomic as well */ + if (enabler->event && enabler->event->status) + set_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); + else + clear_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); + + kunmap_local(kaddr); + unpin_user_pages_dirty_lock(&page, 1, true); + + return 0; +} + +static void user_event_enabler_update(struct user_event *user) +{ + struct user_event_enabler *enabler; + struct user_event_mm *mm = user_event_mm_get_all(user); + struct user_event_mm *next; + + while (mm) { + next = mm->next; + mmap_read_lock(mm->mm); + rcu_read_lock(); + + list_for_each_entry_rcu(enabler, &mm->enablers, link) + if (enabler->event == user) + user_event_enabler_write(mm, enabler); + + rcu_read_unlock(); + mmap_read_unlock(mm->mm); + user_event_mm_put(mm); + mm = next; + } +} + +static bool user_event_enabler_dup(struct user_event_enabler *orig, + struct user_event_mm *mm) +{ + struct user_event_enabler *enabler; + + enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT); + + if (!enabler) + return false; + + enabler->event = orig->event; + enabler->addr = orig->addr; + + /* Only dup part of value (ignore future flags, etc) */ + enabler->values = orig->values & ENABLE_VAL_DUP_MASK; + + refcount_inc(&enabler->event->refcnt); + list_add_rcu(&enabler->link, &mm->enablers); + + return true; +} + +static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm) +{ + refcount_inc(&mm->refcnt); + + return mm; +} + +static struct user_event_mm *user_event_mm_get_all(struct user_event *user) +{ + struct user_event_mm *found = NULL; + struct user_event_enabler *enabler; + struct user_event_mm *mm; + + /* + * We do not want to block fork/exec while enablements are being + * updated, so we use RCU to walk the current tasks that have used + * user_events ABI for 1 or more events. Each enabler found in each + * task that matches the event being updated has a write to reflect + * the kernel state back into the process. Waits/faults must not occur + * during this. So we scan the list under RCU for all the mm that have + * the event within it. This is needed because mm_read_lock() can wait. + * Each user mm returned has a ref inc to handle remove RCU races. 
+ */ + rcu_read_lock(); + + list_for_each_entry_rcu(mm, &user_event_mms, link) + list_for_each_entry_rcu(enabler, &mm->enablers, link) + if (enabler->event == user) { + mm->next = found; + found = user_event_mm_get(mm); + break; + } + + rcu_read_unlock(); + + return found; +} + +static struct user_event_mm *user_event_mm_create(struct task_struct *t) +{ + struct user_event_mm *user_mm; + unsigned long flags; + + user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL); + + if (!user_mm) + return NULL; + + user_mm->mm = t->mm; + INIT_LIST_HEAD(&user_mm->enablers); + refcount_set(&user_mm->refcnt, 1); + refcount_set(&user_mm->tasks, 1); + + spin_lock_irqsave(&user_event_mms_lock, flags); + list_add_rcu(&user_mm->link, &user_event_mms); + spin_unlock_irqrestore(&user_event_mms_lock, flags); + + t->user_event_mm = user_mm; + + /* + * The lifetime of the memory descriptor can slightly outlast + * the task lifetime if a ref to the user_event_mm is taken + * between list_del_rcu() and call_rcu(). Therefore we need + * to take a reference to it to ensure it can live this long + * under this corner case. This can also occur in clones that + * outlast the parent. + */ + mmgrab(user_mm->mm); + + return user_mm; +} + +static struct user_event_mm *current_user_event_mm(void) +{ + struct user_event_mm *user_mm = current->user_event_mm; + + if (user_mm) + goto inc; + + user_mm = user_event_mm_create(current); + + if (!user_mm) + goto error; +inc: + refcount_inc(&user_mm->refcnt); +error: + return user_mm; +} + +static void user_event_mm_destroy(struct user_event_mm *mm) +{ + struct user_event_enabler *enabler, *next; + + list_for_each_entry_safe(enabler, next, &mm->enablers, link) + user_event_enabler_destroy(enabler); + + mmdrop(mm->mm); + kfree(mm); +} + +static void user_event_mm_put(struct user_event_mm *mm) +{ + if (mm && refcount_dec_and_test(&mm->refcnt)) + user_event_mm_destroy(mm); +} + +static void delayed_user_event_mm_put(struct work_struct *work) +{ + struct user_event_mm *mm; + + mm = container_of(to_rcu_work(work), struct user_event_mm, put_rwork); + user_event_mm_put(mm); +} + +void user_event_mm_remove(struct task_struct *t) { - int i = user->index; + struct user_event_mm *mm; + unsigned long flags; + + might_sleep(); + + mm = t->user_event_mm; + t->user_event_mm = NULL; + + /* Clone will increment the tasks, only remove if last clone */ + if (!refcount_dec_and_test(&mm->tasks)) + return; + + /* Remove the mm from the list, so it can no longer be enabled */ + spin_lock_irqsave(&user_event_mms_lock, flags); + list_del_rcu(&mm->link); + spin_unlock_irqrestore(&user_event_mms_lock, flags); + + /* + * We need to wait for currently occurring writes to stop within + * the mm. This is required since exit_mm() snaps the current rss + * stats and clears them. On the final mmdrop(), check_mm() will + * report a bug if these increment. + * + * All writes/pins are done under mmap_read lock, take the write + * lock to ensure in-progress faults have completed. Faults that + * are pending but yet to run will check the task count and skip + * the fault since the mm is going away. + */ + mmap_write_lock(mm->mm); + mmap_write_unlock(mm->mm); - user->group->register_page_data[MAP_STATUS_BYTE(i)] |= MAP_STATUS_MASK(i); + /* + * Put for mm must be done after RCU delay to handle new refs in + * between the list_del_rcu() and now. This ensures any get refs + * during rcu_read_lock() are accounted for during list removal. 
+ * + * CPU A | CPU B + * --------------------------------------------------------------- + * user_event_mm_remove() | rcu_read_lock(); + * list_del_rcu() | list_for_each_entry_rcu(); + * call_rcu() | refcount_inc(); + * . | rcu_read_unlock(); + * schedule_work() | . + * user_event_mm_put() | . + * + * mmdrop() cannot be called in the softirq context of call_rcu() + * so we use a work queue after call_rcu() to run within. + */ + INIT_RCU_WORK(&mm->put_rwork, delayed_user_event_mm_put); + queue_rcu_work(system_wq, &mm->put_rwork); } -static __always_inline -void user_event_register_clear(struct user_event *user) +void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm) { - int i = user->index; + struct user_event_mm *mm = user_event_mm_create(t); + struct user_event_enabler *enabler; - user->group->register_page_data[MAP_STATUS_BYTE(i)] &= ~MAP_STATUS_MASK(i); + if (!mm) + return; + + rcu_read_lock(); + + list_for_each_entry_rcu(enabler, &old_mm->enablers, link) + if (!user_event_enabler_dup(enabler, mm)) + goto error; + + rcu_read_unlock(); + + return; +error: + rcu_read_unlock(); + user_event_mm_remove(t); +} + +static struct user_event_enabler +*user_event_enabler_create(struct user_reg *reg, struct user_event *user, + int *write_result) +{ + struct user_event_enabler *enabler; + struct user_event_mm *user_mm; + unsigned long uaddr = (unsigned long)reg->enable_addr; + + user_mm = current_user_event_mm(); + + if (!user_mm) + return NULL; + + enabler = kzalloc(sizeof(*enabler), GFP_KERNEL); + + if (!enabler) + goto out; + + enabler->event = user; + enabler->addr = uaddr; + enabler->values = reg->enable_bit; +retry: + /* Prevents state changes from racing with new enablers */ + mutex_lock(&event_mutex); + + /* Attempt to reflect the current state within the process */ + mmap_read_lock(user_mm->mm); + *write_result = user_event_enabler_write(user_mm, enabler); + mmap_read_unlock(user_mm->mm); + + /* + * If the write works, then we will track the enabler. A ref to the + * underlying user_event is held by the enabler to prevent it going + * away while the enabler is still in use by a process. The ref is + * removed when the enabler is destroyed. This means a event cannot + * be forcefully deleted from the system until all tasks using it + * exit or run exec(), which includes forks and clones. + */ + if (!*write_result) { + refcount_inc(&enabler->event->refcnt); + list_add_rcu(&enabler->link, &user_mm->enablers); + } + + mutex_unlock(&event_mutex); + + if (*write_result) { + /* Attempt to fault-in and retry if it worked */ + if (!user_event_mm_fault_in(user_mm, uaddr)) + goto retry; + + kfree(enabler); + enabler = NULL; + } +out: + user_event_mm_put(user_mm); + + return enabler; } static __always_inline __must_check @@ -824,9 +1136,6 @@ static int destroy_user_event(struct user_event *user) return ret; dyn_event_remove(&user->devent); - - user_event_register_clear(user); - clear_bit(user->index, user->group->page_bitmap); hash_del(&user->node); user_event_destroy_validators(user); @@ -972,9 +1281,9 @@ static void user_event_perf(struct user_event *user, struct iov_iter *i, #endif /* - * Update the register page that is shared between user processes. + * Update the enabled bit among all user processes. 
*/ -static void update_reg_page_for(struct user_event *user) +static void update_enable_bit_for(struct user_event *user) { struct tracepoint *tp = &user->tracepoint; char status = 0; @@ -1005,12 +1314,9 @@ static void update_reg_page_for(struct user_event *user) rcu_read_unlock_sched(); } - if (status) - user_event_register_set(user); - else - user_event_register_clear(user); - user->status = status; + + user_event_enabler_update(user); } /* @@ -1067,10 +1373,10 @@ static int user_event_reg(struct trace_event_call *call, return ret; inc: refcount_inc(&user->refcnt); - update_reg_page_for(user); + update_enable_bit_for(user); return 0; dec: - update_reg_page_for(user); + update_enable_bit_for(user); refcount_dec(&user->refcnt); return 0; } @@ -1266,7 +1572,6 @@ static int user_event_parse(struct user_event_group *group, char *name, struct user_event **newuser) { int ret; - int index; u32 key; struct user_event *user; @@ -1285,11 +1590,6 @@ static int user_event_parse(struct user_event_group *group, char *name, return 0; } - index = find_first_zero_bit(group->page_bitmap, MAX_EVENTS); - - if (index == MAX_EVENTS) - return -EMFILE; - user = kzalloc(sizeof(*user), GFP_KERNEL); if (!user) @@ -1335,14 +1635,11 @@ static int user_event_parse(struct user_event_group *group, char *name, if (ret) goto put_user_lock; - user->index = index; - /* Ensure we track self ref and caller ref (2) */ refcount_set(&user->refcnt, 2); dyn_event_init(&user->devent, &user_event_dops); dyn_event_add(&user->devent, &user->call); - set_bit(user->index, group->page_bitmap); hash_add(group->register_table, &user->node, key); mutex_unlock(&event_mutex); @@ -1559,6 +1856,37 @@ static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg) if (ret) return ret; + /* Ensure no flags, since we don't support any yet */ + if (kreg->flags != 0) + return -EINVAL; + + /* Ensure supported size */ + switch (kreg->enable_size) { + case 4: + /* 32-bit */ + break; +#if BITS_PER_LONG >= 64 + case 8: + /* 64-bit */ + break; +#endif + default: + return -EINVAL; + } + + /* Ensure natural alignment */ + if (kreg->enable_addr % kreg->enable_size) + return -EINVAL; + + /* Ensure bit range for size */ + if (kreg->enable_bit > (kreg->enable_size * BITS_PER_BYTE) - 1) + return -EINVAL; + + /* Ensure accessible */ + if (!access_ok((const void __user *)(uintptr_t)kreg->enable_addr, + kreg->enable_size)) + return -EFAULT; + kreg->size = size; return 0; @@ -1573,8 +1901,10 @@ static long user_events_ioctl_reg(struct user_event_file_info *info, struct user_reg __user *ureg = (struct user_reg __user *)uarg; struct user_reg reg; struct user_event *user; + struct user_event_enabler *enabler; char *name; long ret; + int write_result; ret = user_reg_get(ureg, ®); @@ -1605,8 +1935,28 @@ static long user_events_ioctl_reg(struct user_event_file_info *info, if (ret < 0) return ret; + /* + * user_events_ref_add succeeded: + * At this point we have a user_event, it's lifetime is bound by the + * reference count, not this file. If anything fails, the user_event + * still has a reference until the file is released. During release + * any remaining references (from user_events_ref_add) are decremented. + * + * Attempt to create an enabler, which too has a lifetime tied in the + * same way for the event. Once the task that caused the enabler to be + * created exits or issues exec() then the enablers it has created + * will be destroyed and the ref to the event will be decremented. 
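/*
 * Aside (illustrative user-space error handling, not part of this patch):
 * with the checks added in user_reg_get() above and the initial write below,
 * callers of DIAG_IOCSREG now see -EINVAL for unsupported flags, sizes,
 * misaligned addresses or out-of-range bits, and -EFAULT when the enable
 * address cannot be written even after a fault-in attempt.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

static int try_register(int data_fd, struct user_reg *reg)
{
	if (ioctl(data_fd, DIAG_IOCSREG, reg) == -1) {
		if (errno == EINVAL)
			fprintf(stderr, "bad flags/size/alignment/bit\n");
		else if (errno == EFAULT)
			fprintf(stderr, "enable address not writable\n");
		return -1;
	}

	/* reg->write_index is now valid for writev() payloads */
	return 0;
}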
+ */ + enabler = user_event_enabler_create(®, user, &write_result); + + if (!enabler) + return -ENOMEM; + + /* Write failed/faulted, give error back to caller */ + if (write_result) + return write_result; + put_user((u32)ret, &ureg->write_index); - put_user(user->index, &ureg->status_bit); return 0; } @@ -1720,38 +2070,6 @@ static const struct file_operations user_data_fops = { .release = user_events_release, }; -static struct user_event_group *user_status_group(struct file *file) -{ - struct seq_file *m = file->private_data; - - if (!m) - return NULL; - - return m->private; -} - -/* - * Maps the shared page into the user process for checking if event is enabled. - */ -static int user_status_mmap(struct file *file, struct vm_area_struct *vma) -{ - char *pages; - struct user_event_group *group = user_status_group(file); - unsigned long size = vma->vm_end - vma->vm_start; - - if (size != MAX_BYTES) - return -EINVAL; - - if (!group) - return -EINVAL; - - pages = group->register_page_data; - - return remap_pfn_range(vma, vma->vm_start, - virt_to_phys(pages) >> PAGE_SHIFT, - size, vm_get_page_prot(VM_READ)); -} - static void *user_seq_start(struct seq_file *m, loff_t *pos) { if (*pos) @@ -1775,7 +2093,7 @@ static int user_seq_show(struct seq_file *m, void *p) struct user_event_group *group = m->private; struct user_event *user; char status; - int i, active = 0, busy = 0, flags; + int i, active = 0, busy = 0; if (!group) return -EINVAL; @@ -1784,11 +2102,10 @@ static int user_seq_show(struct seq_file *m, void *p) hash_for_each(group->register_table, i, user, node) { status = user->status; - flags = user->flags; - seq_printf(m, "%d:%s", user->index, EVENT_NAME(user)); + seq_printf(m, "%s", EVENT_NAME(user)); - if (flags != 0 || status != 0) + if (status != 0) seq_puts(m, " #"); if (status != 0) { @@ -1811,7 +2128,6 @@ static int user_seq_show(struct seq_file *m, void *p) seq_puts(m, "\n"); seq_printf(m, "Active: %d\n", active); seq_printf(m, "Busy: %d\n", busy); - seq_printf(m, "Max: %ld\n", MAX_EVENTS); return 0; } @@ -1847,7 +2163,6 @@ static int user_status_open(struct inode *node, struct file *file) static const struct file_operations user_status_fops = { .open = user_status_open, - .mmap = user_status_mmap, .read = seq_read, .llseek = seq_lseek, .release = seq_release, @@ -1868,8 +2183,7 @@ static int create_user_tracefs(void) goto err; } - /* mmap with MAP_SHARED requires writable fd */ - emmap = tracefs_create_file("user_events_status", TRACE_MODE_WRITE, + emmap = tracefs_create_file("user_events_status", TRACE_MODE_READ, NULL, NULL, &user_status_fops); if (!emmap) { From patchwork Wed Mar 29 19:45:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76799 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp654117vqo; Wed, 29 Mar 2023 13:01:29 -0700 (PDT) X-Google-Smtp-Source: AKy350Z1mk5RJJPHhEI2dXNn3FE0vWd0yvmQ8kwYbvgCX5b3LyFHdpcFwXz+rSOPIFDIC/lQcsbd X-Received: by 2002:a05:6a20:1e56:b0:d5:1863:fe5f with SMTP id cy22-20020a056a201e5600b000d51863fe5fmr3620244pzb.2.1680120089564; Wed, 29 Mar 2023 13:01:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120089; cv=none; d=google.com; s=arc-20160816; b=ooP+vVX3FLjaUsU1Hl3c/ySHP0SwzbLc86jZPcbAz00hkyKXfw6MX2rn9q+pGuGiqc algF0REwHqoGY50tQAu+b/eFiDQE3gP/S0022Dw+PkpM/jg8jf3Po1hEle6O1uaQ5zxz gVcH4HoBcS6kovsZUMpofyO3uesOlJZHm7dvrLWWbHvYCVwSMTfHL0Zjrzq8hfXY+KrX 
P+bLRbMPkw9GJK1Emx/2jLcfjKin1aXjOcIVuuobnDqtHhTkvBNhpw07Kn0YCsEzhtyl 61VpfNvYBaMpB/hxJ3MLwzPEUa3qAiUSyV5vNN5ac7QqtghcAcSlH8CKMC9ZJxOyVVRH LVmg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=mf93w3ERRZIkGC52ItWvPjtqHnZoSkxVFEqasWUHZL0=; b=hfMT7VuAzB5OaygmCN5xA+vQ5oeUFqOvQn84eHF0o8ljQP6mxA3WQ9/Y+ij9WImOD4 HMBQ5OF+ptR4JPdG+b4jKZ59C8a4Ex81R6JIGx/2um2VuKttNMk6vvwemdFC4ng7Fk6k wH3WxR1y07g7LesF8Zi9DCDNorfhf4YZ7A25Cnk8JwkZqYTgVv31Ruwr2yxhhWhQR6Li IPfZAEbi906JbIXkrD9wteoMu2l8swBMLlakwiqOxMOe77FCM1gnljNZxpdhTDsbhY6G 31dsbgAR3gPthvltj2lnBfGSIENJb5Eudg6zpHAVvwD9AvEb+FqrpbaXoh6eJDdby/wZ XWpw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id fd21-20020a056a002e9500b00608f52c3f20si33774617pfb.302.2023.03.29.13.01.03; Wed, 29 Mar 2023 13:01:29 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230498AbjC2Tqd (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56196 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230039AbjC2Tpz (ORCPT ); Wed, 29 Mar 2023 15:45:55 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 270011732 for ; Wed, 29 Mar 2023 12:45:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 49A4C61E2F for ; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 21C06C4339B; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk0-002RmY-0o; Wed, 29 Mar 2023 15:45:52 -0400 Message-ID: <20230329194552.069419879@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:31 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 15/25] tracing/user_events: Fixup enable faults asyncly References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733603010977113?= X-GMAIL-MSGID: =?utf-8?q?1761733603010977113?= From: Beau Belgrave When events are enabled within the various tracing facilities, 
such as ftrace/perf, the event_mutex is held. As events are enabled pages are accessed. We do not want page faults to occur under this lock. Instead queue the fault to a workqueue to be handled in a process context safe way without the lock. The enable address is marked faulting while the async fault-in occurs. This ensures that we don't attempt to fault-in more than is necessary. Once the page has been faulted in, an address write is re-attempted. If the page couldn't fault-in, then we wait until the next time the event is enabled to prevent any potential infinite loops. Link: https://lkml.kernel.org/r/20230328235219.203-5-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- kernel/trace/trace_events_user.c | 120 +++++++++++++++++++++++++++++-- 1 file changed, 114 insertions(+), 6 deletions(-) diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 553a82ee7aeb..86bda1660536 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -99,9 +99,23 @@ struct user_event_enabler { /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ #define ENABLE_VAL_BIT_MASK 0x3F +/* Bit 6 is for faulting status of enablement */ +#define ENABLE_VAL_FAULTING_BIT 6 + /* Only duplicate the bit value */ #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK +#define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values) + +/* Used for asynchronous faulting in of pages */ +struct user_event_enabler_fault { + struct work_struct work; + struct user_event_mm *mm; + struct user_event_enabler *enabler; +}; + +static struct kmem_cache *fault_cache; + /* Global list of memory descriptors using user_events */ static LIST_HEAD(user_event_mms); static DEFINE_SPINLOCK(user_event_mms_lock); @@ -263,7 +277,85 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr) } static int user_event_enabler_write(struct user_event_mm *mm, - struct user_event_enabler *enabler) + struct user_event_enabler *enabler, + bool fixup_fault); + +static void user_event_enabler_fault_fixup(struct work_struct *work) +{ + struct user_event_enabler_fault *fault = container_of( + work, struct user_event_enabler_fault, work); + struct user_event_enabler *enabler = fault->enabler; + struct user_event_mm *mm = fault->mm; + unsigned long uaddr = enabler->addr; + int ret; + + ret = user_event_mm_fault_in(mm, uaddr); + + if (ret && ret != -ENOENT) { + struct user_event *user = enabler->event; + + pr_warn("user_events: Fault for mm: 0x%pK @ 0x%llx event: %s\n", + mm->mm, (unsigned long long)uaddr, EVENT_NAME(user)); + } + + /* Prevent state changes from racing */ + mutex_lock(&event_mutex); + + /* + * If we managed to get the page, re-issue the write. We do not + * want to get into a possible infinite loop, which is why we only + * attempt again directly if the page came in. If we couldn't get + * the page here, then we will try again the next time the event is + * enabled/disabled. 
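/*
 * Aside (illustrative user-space setup, hypothetical): the enable word does
 * not have to live in .bss; it can sit in any writable mapping, for example
 * one shared with a user tracer. If that page is not resident when a tracer
 * toggles the event under event_mutex, the write is deferred to the
 * workqueue added by this patch rather than faulting under the lock.
 */
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

static int register_in_mapping(int data_fd)
{
	struct user_reg reg = {0};
	__u32 *word = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			   MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (word == MAP_FAILED)
		return -1;

	reg.size = sizeof(reg);
	reg.name_args = (__u64)"example_event u32 value";
	reg.enable_addr = (__u64)word;
	reg.enable_bit = 31;
	reg.enable_size = sizeof(*word);

	return ioctl(data_fd, DIAG_IOCSREG, &reg);
}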
+ */ + clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + if (!ret) { + mmap_read_lock(mm->mm); + user_event_enabler_write(mm, enabler, true); + mmap_read_unlock(mm->mm); + } + + mutex_unlock(&event_mutex); + + /* In all cases we no longer need the mm or fault */ + user_event_mm_put(mm); + kmem_cache_free(fault_cache, fault); +} + +static bool user_event_enabler_queue_fault(struct user_event_mm *mm, + struct user_event_enabler *enabler) +{ + struct user_event_enabler_fault *fault; + + fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN); + + if (!fault) + return false; + + INIT_WORK(&fault->work, user_event_enabler_fault_fixup); + fault->mm = user_event_mm_get(mm); + fault->enabler = enabler; + + /* Don't try to queue in again while we have a pending fault */ + set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + if (!schedule_work(&fault->work)) { + /* Allow another attempt later */ + clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + user_event_mm_put(mm); + kmem_cache_free(fault_cache, fault); + + return false; + } + + return true; +} + +static int user_event_enabler_write(struct user_event_mm *mm, + struct user_event_enabler *enabler, + bool fixup_fault) { unsigned long uaddr = enabler->addr; unsigned long *ptr; @@ -278,11 +370,19 @@ static int user_event_enabler_write(struct user_event_mm *mm, if (refcount_read(&mm->tasks) == 0) return -ENOENT; + if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))) + return -EBUSY; + ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, &page, NULL, NULL); - if (ret <= 0) { - pr_warn("user_events: Enable write failed\n"); + if (unlikely(ret <= 0)) { + if (!fixup_fault) + return -EFAULT; + + if (!user_event_enabler_queue_fault(mm, enabler)) + pr_warn("user_events: Unable to queue fault handler\n"); + return -EFAULT; } @@ -314,7 +414,7 @@ static void user_event_enabler_update(struct user_event *user) list_for_each_entry_rcu(enabler, &mm->enablers, link) if (enabler->event == user) - user_event_enabler_write(mm, enabler); + user_event_enabler_write(mm, enabler, true); rcu_read_unlock(); mmap_read_unlock(mm->mm); @@ -562,7 +662,7 @@ static struct user_event_enabler /* Attempt to reflect the current state within the process */ mmap_read_lock(user_mm->mm); - *write_result = user_event_enabler_write(user_mm, enabler); + *write_result = user_event_enabler_write(user_mm, enabler, false); mmap_read_unlock(user_mm->mm); /* @@ -2201,16 +2301,24 @@ static int __init trace_events_user_init(void) { int ret; + fault_cache = KMEM_CACHE(user_event_enabler_fault, 0); + + if (!fault_cache) + return -ENOMEM; + init_group = user_event_group_create(&init_user_ns); - if (!init_group) + if (!init_group) { + kmem_cache_destroy(fault_cache); return -ENOMEM; + } ret = create_user_tracefs(); if (ret) { pr_warn("user_events could not register with tracefs\n"); user_event_group_destroy(init_group); + kmem_cache_destroy(fault_cache); init_group = NULL; return ret; } From patchwork Wed Mar 29 19:45:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76805 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp662388vqo; Wed, 29 Mar 2023 13:15:19 -0700 (PDT) X-Google-Smtp-Source: AKy350ajvL3X24I+EDaLXpAfiT5OMSlWo31WATfkNe563owZI2RizxyWf2s6Hq9NIf4I7IK0SMch X-Received: by 2002:a05:6402:e:b0:4fa:d75c:16cd with SMTP id 
d14-20020a056402000e00b004fad75c16cdmr18133480edu.34.1680120918772; Wed, 29 Mar 2023 13:15:18 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120918; cv=none; d=google.com; s=arc-20160816; b=xbVQw6Nr3x2OSEQIQ5nELnqDm0XGcFZ1vZ9t4spaph+iNGofgJQbGTB3DJkfyHsGhH kJ1QQJb1U4lTadghHPdlTpFn2mspsBB0q7xxhAs2VcKg5iwjCVE9n0tWfJlSSbEEhzP0 56hzdSrU1Oj2MSCGp177TOXsF/gArI7CK0EVfti8NqG0/W19KZuuBNQWeKzbYLa0xF23 l2WBe62YIBDU8IPRLEzCD6kJ4/YQsiV+O9L+wKUucYxyFe5UNVqKObDEcLlGQlAburmI gc2temtVgLQgU6bS6S+sO6oarV3NvZIJP93CKf/OCwjuWBiVI4WBZ7ZxtY5VqpZ6Mo0g 7Miw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=t91pBzR1lnMyKWgSusWaJZ7X3ss1lsEIFl04B024llY=; b=Fr5Ya05vpmBeZnWU0KMHUZFYIpp2vOrBiz+enrr6MEJbeT2t54WFsOzPJ9CzywYO8w anKNSKkuBoAWER/FgzGfF81dT5kt381yPs1xCb7sUuutunsAAJ9JCTZe9GL2GDA9ETok isrJMno1SowLgu+TK12UFGVFkR15f3nftVP/3uoqOLd1x150YaBAfrw8/WCchY4c2A9O vxUOrjyKkbxdJQpwTKZCQZAI7ieV/kSdDOyOX3ZVcrBu+urOVk3WSSuxEcFfs8lxMpTF EwRYsm4v2ZDvuF+bG9TjU2B9FpKNFzwXNzUKokkVuCiuWTSNT1LJnOvbjAVZiw2CGZZU jFlA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id n6-20020aa7c686000000b005026897d7bdsi764137edq.1.2023.03.29.13.14.53; Wed, 29 Mar 2023 13:15:18 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230400AbjC2Tqa (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230025AbjC2Tpz (ORCPT ); Wed, 29 Mar 2023 15:45:55 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD206135 for ; Wed, 29 Mar 2023 12:45:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7665261E40 for ; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5295FC4339E; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk0-002Rn8-1T; Wed, 29 Mar 2023 15:45:52 -0400 Message-ID: <20230329194552.271100571@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:32 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 16/25] tracing/user_events: Add ioctl for disabling addresses References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761734472782797081?= X-GMAIL-MSGID: =?utf-8?q?1761734472782797081?= From: Beau Belgrave Enablements are now tracked by the lifetime of the task/mm. User processes need to be able to disable their addresses if tracing is requested to be turned off. Before unmapping the page would suffice. However, we now need a stronger contract. Add an ioctl to enable this. A new flag bit is added, freeing, to user_event_enabler to ensure that if the event is attempted to be removed while a fault is being handled that the remove is delayed until after the fault is reattempted. Link: https://lkml.kernel.org/r/20230328235219.203-6-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- include/uapi/linux/user_events.h | 24 ++++++++ kernel/trace/trace_events_user.c | 97 +++++++++++++++++++++++++++++++- 2 files changed, 119 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h index 22521bc622db..3e7275e3234a 100644 --- a/include/uapi/linux/user_events.h +++ b/include/uapi/linux/user_events.h @@ -46,6 +46,27 @@ struct user_reg { __u32 write_index; } __attribute__((__packed__)); +/* + * Describes an event unregister, callers must set the size, address and bit. + * This structure is passed to the DIAG_IOCSUNREG ioctl to disable bit updates. + */ +struct user_unreg { + /* Input: Size of the user_unreg structure being used */ + __u32 size; + + /* Input: Bit to unregister */ + __u8 disable_bit; + + /* Input: Reserved, set to 0 */ + __u8 __reserved; + + /* Input: Reserved, set to 0 */ + __u16 __reserved2; + + /* Input: Address to unregister */ + __u64 disable_addr; +} __attribute__((__packed__)); + #define DIAG_IOC_MAGIC '*' /* Request to register a user_event */ @@ -54,4 +75,7 @@ struct user_reg { /* Request to delete a user_event */ #define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char *) +/* Requests to unregister a user_event */ +#define DIAG_IOCSUNREG _IOW(DIAG_IOC_MAGIC, 2, struct user_unreg*) + #endif /* _UAPI_LINUX_USER_EVENTS_H */ diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 86bda1660536..f88bab3f1fe1 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -102,6 +102,9 @@ struct user_event_enabler { /* Bit 6 is for faulting status of enablement */ #define ENABLE_VAL_FAULTING_BIT 6 +/* Bit 7 is for freeing status of enablement */ +#define ENABLE_VAL_FREEING_BIT 7 + /* Only duplicate the bit value */ #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK @@ -301,6 +304,12 @@ static void user_event_enabler_fault_fixup(struct work_struct *work) /* Prevent state changes from racing */ mutex_lock(&event_mutex); + /* User asked for enabler to be removed during fault */ + if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) { + user_event_enabler_destroy(enabler); + goto out; + } + /* * If we managed to get the page, re-issue the write. 
We do not * want to get into a possible infinite loop, which is why we only @@ -315,7 +324,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work) user_event_enabler_write(mm, enabler, true); mmap_read_unlock(mm->mm); } - +out: mutex_unlock(&event_mutex); /* In all cases we no longer need the mm or fault */ @@ -370,7 +379,8 @@ static int user_event_enabler_write(struct user_event_mm *mm, if (refcount_read(&mm->tasks) == 0) return -ENOENT; - if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))) + if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)) || + test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)))) return -EBUSY; ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, @@ -428,6 +438,10 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig, { struct user_event_enabler *enabler; + /* Skip pending frees */ + if (unlikely(test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(orig)))) + return true; + enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT); if (!enabler) @@ -2086,6 +2100,79 @@ static long user_events_ioctl_del(struct user_event_file_info *info, return ret; } +static long user_unreg_get(struct user_unreg __user *ureg, + struct user_unreg *kreg) +{ + u32 size; + long ret; + + ret = get_user(size, &ureg->size); + + if (ret) + return ret; + + if (size > PAGE_SIZE) + return -E2BIG; + + if (size < offsetofend(struct user_unreg, disable_addr)) + return -EINVAL; + + ret = copy_struct_from_user(kreg, sizeof(*kreg), ureg, size); + + /* Ensure no reserved values, since we don't support any yet */ + if (kreg->__reserved || kreg->__reserved2) + return -EINVAL; + + return ret; +} + +/* + * Unregisters an enablement address/bit within a task/user mm. + */ +static long user_events_ioctl_unreg(unsigned long uarg) +{ + struct user_unreg __user *ureg = (struct user_unreg __user *)uarg; + struct user_event_mm *mm = current->user_event_mm; + struct user_event_enabler *enabler, *next; + struct user_unreg reg; + long ret; + + ret = user_unreg_get(ureg, ®); + + if (ret) + return ret; + + if (!mm) + return -ENOENT; + + ret = -ENOENT; + + /* + * Flags freeing and faulting are used to indicate if the enabler is in + * use at all. When faulting is set a page-fault is occurring asyncly. + * During async fault if freeing is set, the enabler will be destroyed. + * If no async fault is happening, we can destroy it now since we hold + * the event_mutex during these checks. + */ + mutex_lock(&event_mutex); + + list_for_each_entry_safe(enabler, next, &mm->enablers, link) + if (enabler->addr == reg.disable_addr && + (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) { + set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)); + + if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))) + user_event_enabler_destroy(enabler); + + /* Removed at least one */ + ret = 0; + } + + mutex_unlock(&event_mutex); + + return ret; +} + /* * Handles the ioctl from user mode to register or alter operations. 
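/*
 * Aside (illustrative user-space sketch, not part of this patch): a program
 * that no longer wants the kernel writing into its enable word issues the
 * new ioctl before reusing or freeing that memory, mirroring the values it
 * registered earlier.
 */
#include <sys/ioctl.h>
#include <linux/user_events.h>

static int unregister_enable_word(int data_fd, void *addr, int bit)
{
	struct user_unreg unreg = {0};

	unreg.size = sizeof(unreg);
	unreg.disable_addr = (__u64)addr;	/* address given at register time */
	unreg.disable_bit = bit;		/* bit given at register time     */

	/* fails with ENOENT when no matching enabler exists for this mm */
	return ioctl(data_fd, DIAG_IOCSUNREG, &unreg);
}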
*/ @@ -2108,6 +2195,12 @@ static long user_events_ioctl(struct file *file, unsigned int cmd, ret = user_events_ioctl_del(info, uarg); mutex_unlock(&group->reg_mutex); break; + + case DIAG_IOCSUNREG: + mutex_lock(&group->reg_mutex); + ret = user_events_ioctl_unreg(uarg); + mutex_unlock(&group->reg_mutex); + break; } return ret; From patchwork Wed Mar 29 19:45:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76802 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp661985vqo; Wed, 29 Mar 2023 13:14:33 -0700 (PDT) X-Google-Smtp-Source: AKy350YL8BLIkODYR3Dq7i3SY5Q9bG670WxS8qF+b+Jgf3QR6O7HJ/rp9q1ptsw1cGzkfDl2oV/v X-Received: by 2002:a17:906:1d0a:b0:8b1:7b10:61d5 with SMTP id n10-20020a1709061d0a00b008b17b1061d5mr22738171ejh.33.1680120872725; Wed, 29 Mar 2023 13:14:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120872; cv=none; d=google.com; s=arc-20160816; b=lWgxs9J9IKj5Hsa+3G6m6K3qExKrdde8KROnklVPtEhOtsL5WE4IBIOzqsM0EEFG3b SjIDBtIBqH/q/z890AiMcxZuKGernNsYxlckewGf3uKwD0gJomqgF9iMwlYJdTrZUONN gbu3zypwgGusi392Brsl+a04LKSYTiZ1cK74yXBL01wtrUsF3WJWHGuAexTM2Fn3uhA7 IET3C85tFjKagDnbmBJinMGLFmI2CekVgPfERzj1vrMA4NnLm8jzBc1BiZVi+7sNZuTn TuXVKhWShl+lF8kabV8H/36eS8qxueV08QPuZyf2OZWxAWG6amg7m8JfA8byCz453i0Q SO5w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=deDO9h4UZgfbxuVKBsM74jaBs6EJHB1QiDyd8atBy3Y=; b=wEbDuwgeSurenVONaXDX8MhcDGUEpDMcTdTjt24NQlc8q1UVmqnoVFS/1HH7T5jOQp i+pdolMZ7J2UMO6TYLoW7fgKW4upyrIbj4ZedzV1yYy6J0P3EpyoUc4/fvPoXjB6rov8 7PP+I7xDFBMj2nJ82q+CjhTsDEC/MLS7VNQcoJE2GIRF7oNmhAbBMgp5elPjxYjzvscX GHGsrjNqzXfg4OSRIv8jzmi4vfiHtAtNV0SnmOLw+RqaChNEgxqwdEfm8nJLurHHK8ko yhFcmb+emKWFnYfLzjyQ93c0Ur35IcssDrMcA7QRb/X/mzQMkq6361Z72eCyeSB3o13O Tyzw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id u17-20020aa7d0d1000000b00501dffe7dc3si21702624edo.248.2023.03.29.13.14.08; Wed, 29 Mar 2023 13:14:32 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231168AbjC2Tqt (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230096AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8BCA94219 for ; Wed, 29 Mar 2023 12:45:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5890361E2B for ; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9D557C433D2; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk0-002Rng-29; Wed, 29 Mar 2023 15:45:52 -0400 Message-ID: <20230329194552.474991854@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:33 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 17/25] tracing/user_events: Update self-tests to write ABI References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761734424413595356?= X-GMAIL-MSGID: =?utf-8?q?1761734424413595356?= From: Beau Belgrave ABI has been changed to remote writes, update existing test cases to use this new ABI to ensure existing functionality continues to work. 
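At its core, the expectation the rewritten tests verify can be condensed to the sketch below (hypothetical helper: enable_fd, check and bit mirror the fixture fields and the enable_bit value of 31 used in the tests that follow):

#include <assert.h>
#include <unistd.h>

static void expect_toggle(int enable_fd, int *check, int bit)
{
	write(enable_fd, "1", sizeof("1"));	/* enable via tracefs  */
	assert(*check == (1 << bit));		/* kernel set the bit  */

	write(enable_fd, "0", sizeof("0"));	/* disable again       */
	assert(*check == 0);			/* kernel cleared it   */
}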
Link: https://lkml.kernel.org/r/20230328235219.203-7-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- .../selftests/user_events/ftrace_test.c | 152 ++++++++++-------- .../testing/selftests/user_events/perf_test.c | 33 ++-- 2 files changed, 96 insertions(+), 89 deletions(-) diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c index a0b2c96eb252..aceafacfb126 100644 --- a/tools/testing/selftests/user_events/ftrace_test.c +++ b/tools/testing/selftests/user_events/ftrace_test.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include "../kselftest_harness.h" @@ -22,11 +23,6 @@ const char *enable_file = "/sys/kernel/tracing/events/user_events/__test_event/e const char *trace_file = "/sys/kernel/tracing/trace"; const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; -static inline int status_check(char *status_page, int status_bit) -{ - return status_page[status_bit >> 3] & (1 << (status_bit & 7)); -} - static int trace_bytes(void) { int fd = open(trace_file, O_RDONLY); @@ -106,13 +102,23 @@ static int get_print_fmt(char *buffer, int len) return -1; } -static int clear(void) +static int clear(int *check) { + struct user_unreg unreg = {0}; + + unreg.size = sizeof(unreg); + unreg.disable_bit = 31; + unreg.disable_addr = (__u64)check; + int fd = open(data_file, O_RDWR); if (fd == -1) return -1; + if (ioctl(fd, DIAG_IOCSUNREG, &unreg) == -1) + if (errno != ENOENT) + return -1; + if (ioctl(fd, DIAG_IOCSDEL, "__test_event") == -1) if (errno != ENOENT) return -1; @@ -122,7 +128,7 @@ static int clear(void) return 0; } -static int check_print_fmt(const char *event, const char *expected) +static int check_print_fmt(const char *event, const char *expected, int *check) { struct user_reg reg = {0}; char print_fmt[256]; @@ -130,7 +136,7 @@ static int check_print_fmt(const char *event, const char *expected) int fd; /* Ensure cleared */ - ret = clear(); + ret = clear(check); if (ret != 0) return ret; @@ -142,14 +148,19 @@ static int check_print_fmt(const char *event, const char *expected) reg.size = sizeof(reg); reg.name_args = (__u64)event; + reg.enable_bit = 31; + reg.enable_addr = (__u64)check; + reg.enable_size = sizeof(*check); /* Register should work */ ret = ioctl(fd, DIAG_IOCSREG, ®); close(fd); - if (ret != 0) + if (ret != 0) { + printf("Reg failed in fmt\n"); return ret; + } /* Ensure correct print_fmt */ ret = get_print_fmt(print_fmt, sizeof(print_fmt)); @@ -164,6 +175,7 @@ FIXTURE(user) { int status_fd; int data_fd; int enable_fd; + int check; }; FIXTURE_SETUP(user) { @@ -185,59 +197,56 @@ FIXTURE_TEARDOWN(user) { close(self->enable_fd); } - ASSERT_EQ(0, clear()); + if (clear(&self->check) != 0) + printf("WARNING: Clear didn't work!\n"); } TEST_F(user, register_events) { struct user_reg reg = {0}; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; + struct user_unreg unreg = {0}; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); + unreg.size = sizeof(unreg); + unreg.disable_bit = 31; + unreg.disable_addr = (__u64)&self->check; /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Multiple registers should result in same 
index */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Ensure disabled */ self->enable_fd = open(enable_file, O_RDWR); ASSERT_NE(-1, self->enable_fd); ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0"))) - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); - /* Enable event and ensure bits updated in status */ ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); /* Disable event and ensure bits updated in status */ ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0"))) - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* File still open should return -EBUSY for delete */ ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event")); ASSERT_EQ(EBUSY, errno); - /* Delete should work only after close */ + /* Unregister */ + ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg)); + + /* Delete should work only after close and unregister */ close(self->data_fd); self->data_fd = open(data_file, O_RDWR); ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event")); - - /* Unmap should work */ - ASSERT_EQ(0, munmap(status_page, page_size)); } TEST_F(user, write_events) { @@ -245,11 +254,12 @@ TEST_F(user, write_events) { struct iovec io[3]; __u32 field1, field2; int before = 0, after = 0; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); field1 = 1; field2 = 2; @@ -261,18 +271,10 @@ TEST_F(user, write_events) { io[2].iov_base = &field2; io[2].iov_len = sizeof(field2); - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); - /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* Write should fail on invalid slot with ENOENT */ io[0].iov_base = &field2; @@ -287,7 +289,7 @@ TEST_F(user, write_events) { ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) /* Event should now be enabled */ - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_NE(1 << reg.enable_bit, self->check); /* Write should make it out to ftrace buffers */ before = trace_bytes(); @@ -304,6 +306,9 @@ TEST_F(user, write_fault) { reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u64 anon"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); anon = mmap(NULL, l, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); ASSERT_NE(MAP_FAILED, anon); @@ -316,7 +321,6 @@ TEST_F(user, write_fault) { /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Write should work normally */ ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 2)); @@ -333,24 +337,17 @@ TEST_F(user, write_validator) { int loc, bytes; char data[8]; int before = 0, after = 0; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; - - 
status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event __rel_loc char[] data"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); io[0].iov_base = ®.write_index; io[0].iov_len = sizeof(reg.write_index); @@ -369,7 +366,7 @@ TEST_F(user, write_validator) { ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) /* Event should now be enabled */ - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); /* Full in-bounds write should work */ before = trace_bytes(); @@ -409,71 +406,88 @@ TEST_F(user, print_fmt) { int ret; ret = check_print_fmt("__test_event __rel_loc char[] data", - "print fmt: \"data=%s\", __get_rel_str(data)"); + "print fmt: \"data=%s\", __get_rel_str(data)", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event __data_loc char[] data", - "print fmt: \"data=%s\", __get_str(data)"); + "print fmt: \"data=%s\", __get_str(data)", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s64 data", - "print fmt: \"data=%lld\", REC->data"); + "print fmt: \"data=%lld\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u64 data", - "print fmt: \"data=%llu\", REC->data"); + "print fmt: \"data=%llu\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s32 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u32 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event int data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned int data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s16 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u16 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event short data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned short data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s8 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u8 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event char data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); 
ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned char data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event char[4] data", - "print fmt: \"data=%s\", REC->data"); + "print fmt: \"data=%s\", REC->data", + &self->check); ASSERT_EQ(0, ret); } diff --git a/tools/testing/selftests/user_events/perf_test.c b/tools/testing/selftests/user_events/perf_test.c index 31505642aa9b..a070258d4449 100644 --- a/tools/testing/selftests/user_events/perf_test.c +++ b/tools/testing/selftests/user_events/perf_test.c @@ -19,7 +19,6 @@ #include "../kselftest_harness.h" const char *data_file = "/sys/kernel/tracing/user_events_data"; -const char *status_file = "/sys/kernel/tracing/user_events_status"; const char *id_file = "/sys/kernel/tracing/events/user_events/__test_event/id"; const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; @@ -35,11 +34,6 @@ static long perf_event_open(struct perf_event_attr *pe, pid_t pid, return syscall(__NR_perf_event_open, pe, pid, cpu, group_fd, flags); } -static inline int status_check(char *status_page, int status_bit) -{ - return status_page[status_bit >> 3] & (1 << (status_bit & 7)); -} - static int get_id(void) { FILE *fp = fopen(id_file, "r"); @@ -88,45 +82,38 @@ static int get_offset(void) } FIXTURE(user) { - int status_fd; int data_fd; + int check; }; FIXTURE_SETUP(user) { - self->status_fd = open(status_file, O_RDONLY); - ASSERT_NE(-1, self->status_fd); - self->data_fd = open(data_file, O_RDWR); ASSERT_NE(-1, self->data_fd); } FIXTURE_TEARDOWN(user) { - close(self->status_fd); close(self->data_fd); } TEST_F(user, perf_write) { struct perf_event_attr pe = {0}; struct user_reg reg = {0}; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; struct event event; struct perf_event_mmap_page *perf_page; + int page_size = sysconf(_SC_PAGESIZE); int id, fd, offset; __u32 *val; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; - - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); - ASSERT_NE(MAP_FAILED, status_page); + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* Id should be there */ id = get_id(); @@ -149,7 +136,7 @@ TEST_F(user, perf_write) { ASSERT_NE(MAP_FAILED, perf_page); /* Status should be updated */ - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); event.index = reg.write_index; event.field1 = 0xc001; @@ -165,6 +152,12 @@ TEST_F(user, perf_write) { /* Ensure correct */ ASSERT_EQ(event.field1, *val++); ASSERT_EQ(event.field2, *val++); + + munmap(perf_page, page_size * 2); + close(fd); + + /* Status should be updated */ + ASSERT_EQ(0, self->check); } int main(int argc, char **argv) From patchwork Wed Mar 29 19:45:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76790 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp652462vqo; Wed, 29 Mar 2023 12:58:59 -0700 (PDT) X-Google-Smtp-Source: 
AKy350almw0i2G7tQqjkZqRlveSVT0KXxb8SIGEm4/WAX3oIF/tWMakZV6u6/NXZQP9jBqzGbwwx X-Received: by 2002:a17:902:d101:b0:1a1:c671:8bc9 with SMTP id w1-20020a170902d10100b001a1c6718bc9mr16206463plw.7.1680119938939; Wed, 29 Mar 2023 12:58:58 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119938; cv=none; d=google.com; s=arc-20160816; b=PCpeH/LNnrMPcW70r26v9jvddPQn197uBVbdfWsX2y05SjCFbU8LjH6nrKbJG+CcI+ V6mW5h0yEwWZqjLw46LV/QN1mxSiIX5dKG6fwgCZkevNdAS2cFMlF8MJXwx4OtjLBLBS zjX6feZU6JRR29woLgrk4jiFAwkrQcVciNAnMKAncCaAUdwths6h7TgWsqIYkTjxJjEv pLQejwiM8sGENdHLwz1d5H4QzA9lVVKM0GFlwZxUMxD2ZbI7izNbrVNpIPInOMhIPfST L3jTWqiKX6cikEeDuDVXVbliuJ4I5BcrHgLVnpX1MtKu882XdlvfFSJy+DTCV7Eieoj2 3+Eg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=RajYIb9JqqtK3Fpkvhg6wac+dKxQVACS0VoC0p6l324=; b=BjYsw2LBUSP/EKQzE7TxwBfiGI4kHoqXuewHBevC1H8SNS98T6WU2g7yicQwmDczHp /aADCaQQytiZauCZNIFY+A7BgPz8FNnFzFfHlzCc+uCGUOsD+djyId8F2/7uYObozude TNHGicbUNRyf6efjLefOShYCUiGQ9HvUj+uqsLGNkp06gPwDoWqPaReLchz3pOEeo7qV m6X791XUHi/e8/quvxyJmUEP2WPDy95XU/LOkvAkKTag6icVVQMgPJhMiQH3Ss4tVR/X MA8c3f18Kbui906GJ/Dc8+u+6KNnJnlPTEKtdwIKC1jDxoeDJywfE72MhSuByktehZkw 3YhQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id j5-20020a170902da8500b001a1bfe84f8dsi28833764plx.611.2023.03.29.12.58.45; Wed, 29 Mar 2023 12:58:58 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231140AbjC2Tqm (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230048AbjC2Tp5 (ORCPT ); Wed, 29 Mar 2023 15:45:57 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8FD024C01 for ; Wed, 29 Mar 2023 12:45:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7181D61E1D for ; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E44D8C4339C; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk0-002RoG-2n; Wed, 29 Mar 2023 15:45:52 -0400 Message-ID: <20230329194552.684425169@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:34 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 18/25] tracing/user_events: Add ABI self-test References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 
tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733445160708424?= X-GMAIL-MSGID: =?utf-8?q?1761733445160708424?= From: Beau Belgrave Add ABI specific self-test to ensure enablements work in various scenarios such as fork, VM_CLONE, and basic event enable/disable. Ensure ABI contracts/limits are also being upheld, such as bit limits and data size limits. Link: https://lkml.kernel.org/r/20230328235219.203-8-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- tools/testing/selftests/user_events/Makefile | 2 +- .../testing/selftests/user_events/abi_test.c | 226 ++++++++++++++++++ 2 files changed, 227 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/user_events/abi_test.c diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile index 6b512b86aec3..9e95bd41b0b4 100644 --- a/tools/testing/selftests/user_events/Makefile +++ b/tools/testing/selftests/user_events/Makefile @@ -10,7 +10,7 @@ LDLIBS += -lrt -lpthread -lm # This test will not compile until user_events.h is added # back to uapi. -TEST_GEN_PROGS = ftrace_test dyn_test perf_test +TEST_GEN_PROGS = ftrace_test dyn_test perf_test abi_test TEST_FILES := settings diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c new file mode 100644 index 000000000000..e0323d3777a7 --- /dev/null +++ b/tools/testing/selftests/user_events/abi_test.c @@ -0,0 +1,226 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * User Events ABI Test Program + * + * Copyright (c) 2022 Beau Belgrave + */ + +#define _GNU_SOURCE +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../kselftest_harness.h" + +const char *data_file = "/sys/kernel/tracing/user_events_data"; +const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable"; + +static int change_event(bool enable) +{ + int fd = open(enable_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + if (enable) + ret = write(fd, "1", 1); + else + ret = write(fd, "0", 1); + + close(fd); + + if (ret == 1) + ret = 0; + else + ret = -1; + + return ret; +} + +static int reg_enable(long *enable, int size, int bit) +{ + struct user_reg reg = {0}; + int fd = open(data_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + reg.size = sizeof(reg); + reg.name_args = (__u64)"__abi_event"; + reg.enable_bit = bit; + reg.enable_addr = (__u64)enable; + reg.enable_size = size; + + ret = ioctl(fd, DIAG_IOCSREG, ®); + + close(fd); + + return ret; +} + +static int reg_disable(long *enable, int bit) +{ + struct user_unreg reg = {0}; + int fd = open(data_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + reg.size = sizeof(reg); + reg.disable_bit = bit; + reg.disable_addr = (__u64)enable; + + ret = ioctl(fd, DIAG_IOCSUNREG, ®); + + close(fd); + + return ret; +} + +FIXTURE(user) { + long check; +}; + +FIXTURE_SETUP(user) { + change_event(false); + self->check = 0; +} + +FIXTURE_TEARDOWN(user) { +} + +TEST_F(user, enablement) { + /* Changes should be reflected immediately */ + ASSERT_EQ(0, self->check); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + 
ASSERT_EQ(0, change_event(true)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(0, self->check); + + /* Should not change after disable */ + ASSERT_EQ(0, change_event(true)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, reg_disable(&self->check, 0)); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(1, self->check); + self->check = 0; +} + +TEST_F(user, bit_sizes) { + /* Allow 0-31 bits for 32-bit */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 31)); + ASSERT_NE(0, reg_enable(&self->check, sizeof(int), 32)); + ASSERT_EQ(0, reg_disable(&self->check, 0)); + ASSERT_EQ(0, reg_disable(&self->check, 31)); + +#if BITS_PER_LONG == 8 + /* Allow 0-64 bits for 64-bit */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(long), 63)); + ASSERT_NE(0, reg_enable(&self->check, sizeof(long), 64)); + ASSERT_EQ(0, reg_disable(&self->check, 63)); +#endif + + /* Disallowed sizes (everything beside 4 and 8) */ + ASSERT_NE(0, reg_enable(&self->check, 1, 0)); + ASSERT_NE(0, reg_enable(&self->check, 2, 0)); + ASSERT_NE(0, reg_enable(&self->check, 3, 0)); + ASSERT_NE(0, reg_enable(&self->check, 5, 0)); + ASSERT_NE(0, reg_enable(&self->check, 6, 0)); + ASSERT_NE(0, reg_enable(&self->check, 7, 0)); + ASSERT_NE(0, reg_enable(&self->check, 9, 0)); + ASSERT_NE(0, reg_enable(&self->check, 128, 0)); +} + +TEST_F(user, forks) { + int i; + + /* Ensure COW pages get updated after fork */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, self->check); + + if (fork() == 0) { + /* Force COW */ + self->check = 0; + + /* Up to 1 sec for enablement */ + for (i = 0; i < 10; ++i) { + usleep(100000); + + if (self->check) + exit(0); + } + + exit(1); + } + + /* Allow generous time for COW, then enable */ + usleep(100000); + ASSERT_EQ(0, change_event(true)); + + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + + /* Ensure child doesn't disable parent */ + if (fork() == 0) + exit(reg_disable(&self->check, 0)); + + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(0, self->check); +} + +/* Waits up to 1 sec for enablement */ +static int clone_check(void *check) +{ + int i; + + for (i = 0; i < 10; ++i) { + usleep(100000); + + if (*(long *)check) + return 0; + } + + return 1; +} + +TEST_F(user, clones) { + int i, stack_size = 4096; + void *stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, + -1, 0); + + ASSERT_NE(MAP_FAILED, stack); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, self->check); + + /* Shared VM should see enablements */ + ASSERT_NE(-1, clone(&clone_check, stack + stack_size, + CLONE_VM | SIGCHLD, &self->check)); + + ASSERT_EQ(0, change_event(true)); + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + munmap(stack, stack_size); + ASSERT_EQ(0, change_event(false)); +} + +int main(int argc, char **argv) +{ + return test_harness_run(argc, argv); +} From patchwork Wed Mar 29 19:45:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76784 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp649330vqo; Wed, 29 Mar 2023 12:52:21 -0700 (PDT) X-Google-Smtp-Source: AKy350bSLubh0grzipJ5/sGsIscSejdeVyll6chbjUDvSJgWgX5tCoQr1P1O/KJrICvbRUHsN0Ud X-Received: by 2002:a17:902:d492:b0:1a2:8866:e8b2 
with SMTP id c18-20020a170902d49200b001a28866e8b2mr59582plg.1.1680119541284; Wed, 29 Mar 2023 12:52:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119541; cv=none; d=google.com; s=arc-20160816; b=wxKCganiwCIvR6rW7+0dsBAcjjvNZXiILtdo9cF30l7AB6BIMq9+PM58I8borkswzm k5l7/X2f0NiqRu6LJ8qqKg+AQ2uIkW97H4YbgQZifPDP5VMURg9w4DvoJ0tw3LoVa51L ED75rFhsCChl0fFUw/EjQBxux6y37dvBK00B4h3S7xk67oqeOFBmhdVUTF8sBA5/d/cc FH9EpTYrAV5R+qdDIgNF5t/0zD5N0SGC4T8ATLP5XdZnhNaYPG+0S45VcFb5hoepXHKM 5Q5s/9WEL6OKaQFGiOXuy+4JMKCrz8TTKqxDJ+XoT6xQo2ffOgZ7/bvLau0foX7SZQoO RQoQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=02d7TFgJzLRkkG5zB+NkdUfduz3lCjl4V3vD+kU8pvs=; b=JFKXl9vEA6VRYXHBY/i6ZEzk0YeFCmKfF4vA6kFtVkYFynqwOKUORMhNdqUn9mwNtN eT9KHPx7IWQEyospMTMyLtnvo3jcm1yD/9l9KmujwWCCwxajiMUZ5Ri+v+o6wR56EXUS oK3gfD5CjLFNYbqHC4kJ5Oi+vshtcYPJV/AaZXqMFBhfI4IBJ6N0TbygckDz3MxJn7UN /SNzwaKCVF03V0K1XBV3R0yvtcE6yS/u9TLuTB9Pkhg1cn11sGg+3uczSx/gQU4Qvzio H99WnZ8zHTghm6I9UCFVDU86KJYK9lPu0FrwfTZRmqAIj53pNB2jPim7o2VSiu04gMc/ JndA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id h6-20020a170902ac8600b001a1a83b02a4si31013669plr.258.2023.03.29.12.52.08; Wed, 29 Mar 2023 12:52:21 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230173AbjC2TrD (ORCPT + 99 others); Wed, 29 Mar 2023 15:47:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230131AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D540065B7 for ; Wed, 29 Mar 2023 12:45:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 22B7BB82371 for ; Wed, 29 Mar 2023 19:45:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EA0C5C4339E; Wed, 29 Mar 2023 19:45:53 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk1-002Rop-0E; Wed, 29 Mar 2023 15:45:53 -0400 Message-ID: <20230329194552.889121252@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:35 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 19/25] tracing/user_events: Use write ABI in example References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733028337806898?= X-GMAIL-MSGID: =?utf-8?q?1761733028337806898?= From: Beau Belgrave The ABI has changed to use a remote write approach. Update the example to show the expected use of this new ABI. Also remove debugfs path and use tracefs to ensure example works in more environments. Link: https://lkml.kernel.org/r/20230328235219.203-9-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- samples/user_events/example.c | 45 +++++++---------------------------- 1 file changed, 8 insertions(+), 37 deletions(-) diff --git a/samples/user_events/example.c b/samples/user_events/example.c index 18e34c9d708e..28165a096697 100644 --- a/samples/user_events/example.c +++ b/samples/user_events/example.c @@ -9,51 +9,28 @@ #include #include #include +#include #include #include #include -#include -#include #include -#if __BITS_PER_LONG == 64 -#define endian_swap(x) htole64(x) -#else -#define endian_swap(x) htole32(x) -#endif - -/* Assumes debugfs is mounted */ const char *data_file = "/sys/kernel/tracing/user_events_data"; -const char *status_file = "/sys/kernel/tracing/user_events_status"; +int enabled = 0; -static int event_status(long **status) -{ - int fd = open(status_file, O_RDONLY); - - *status = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, - MAP_SHARED, fd, 0); - - close(fd); - - if (*status == MAP_FAILED) - return -1; - - return 0; -} - -static int event_reg(int fd, const char *command, long *index, long *mask, - int *write) +static int event_reg(int fd, const char *command, int *write, int *enabled) { struct user_reg reg = {0}; reg.size = sizeof(reg); + reg.enable_bit = 31; + reg.enable_size = sizeof(*enabled); + reg.enable_addr = (__u64)enabled; reg.name_args = (__u64)command; if (ioctl(fd, DIAG_IOCSREG, ®) == -1) return -1; - *index = reg.status_bit / __BITS_PER_LONG; - *mask = endian_swap(1L << (reg.status_bit % __BITS_PER_LONG)); *write = reg.write_index; return 0; @@ -62,17 +39,12 @@ static int event_reg(int fd, const char *command, long *index, long *mask, int main(int argc, char **argv) { int data_fd, write; - long index, mask; - long *status_page; struct iovec io[2]; __u32 count = 0; - if (event_status(&status_page) == -1) - return errno; - data_fd = open(data_file, O_RDWR); - if (event_reg(data_fd, "test u32 count", &index, &mask, &write) == -1) + if (event_reg(data_fd, "test u32 count", &write, &enabled) == -1) return errno; /* Setup iovec */ @@ -80,13 +52,12 @@ int main(int argc, char **argv) io[0].iov_len = sizeof(write); io[1].iov_base = &count; io[1].iov_len = sizeof(count); - ask: printf("Press enter to check status...\n"); getchar(); /* Check if anyone is listening */ - if (status_page[index] & mask) { + if (enabled) { /* Yep, trace out our data */ writev(data_fd, (const struct iovec *)io, 2); From patchwork Wed Mar 29 19:45:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76803 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp662019vqo; Wed, 29 Mar 2023 13:14:37 -0700 (PDT) X-Google-Smtp-Source: AKy350aet7dghWwXiSkmmHGCyVSXmQbi+0JvFMiSrLFqQlLKEK27hpwyFx2Ogympzcmrn/MxXgnU X-Received: by 2002:a17:906:74f:b0:933:3b2e:6016 with SMTP id 
z15-20020a170906074f00b009333b2e6016mr20576846ejb.7.1680120876849; Wed, 29 Mar 2023 13:14:36 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120876; cv=none; d=google.com; s=arc-20160816; b=LeDy5qaC2uvXGT32KpWsBm5sRzJlZfeecWbU+F+OnL2Ut9l27PYZh9xhXVX59TyZJ0 foFNq9yXHJjz8fwuJH12sYFzgwMwN8/rpObxtKq/8mWhAxc6IIYD8xF39RW6oeRDWX2P DNr06+FBi3ipucvZoRpdbiuTjsiQFGGDKOw5D9hN5JX/PTWu7syb+sNnZuowJvdejZ9T LebavEx0gqza6nQ8jGpySIOff9pjOxHnnIQVXXMsY+e+YMl8SCrUX+EZmgENeuUkC/Vd XBgojt81CEHNepuyEjD2MPCKgcGa3vaFWHdv61s3EPaCZob+ziSguXl4SRs57k7STFUB Z26w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=ZhnGtiGff/L2/5LNK6HDz+jK9zwmenA5jawX9x4gA9Q=; b=rwovJkNmaF8RISYxuw5oxPLw/cBLz16dw6cOgcmCpxl/9kl7zemlZfSm6WTDurFOxn RlOK4gUbhU/iK0QS1v0cFeUWAHzif2eVmurEI1aOExbhehm3MaUFPJ34P83nuQpM13DR tnaNKQFoen/T63WQnmzWUk+Zb5j4W6VWbDyT66Bde0bLCc/qvkIgS1u0T2nT9AJ89vJb rwa3cEY82fT+ReKsv2aA1gETcUs7150RdVHTkw/zLM53ZAYh/Rqxo1jl6djGUuKMeUTs qQit3MfxZisJR6xxrdCLdEQWNAb5qBBL/8pZ3K2G/EZbWx1h7Q437zLmHCIoxCP4wFsk DFhg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id u11-20020a170906950b00b009332d9b2a96si24348019ejx.955.2023.03.29.13.14.11; Wed, 29 Mar 2023 13:14:36 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231240AbjC2TrI (ORCPT + 99 others); Wed, 29 Mar 2023 15:47:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230181AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6A886A58 for ; Wed, 29 Mar 2023 12:45:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 604FCB82433 for ; Wed, 29 Mar 2023 19:45:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0A1EDC433A4; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk1-002RpO-0u; Wed, 29 Mar 2023 15:45:53 -0400 Message-ID: <20230329194553.094479648@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:36 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 20/25] tracing/user_events: Update documentation for ABI References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no 
version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761734428652329252?= X-GMAIL-MSGID: =?utf-8?q?1761734428652329252?= From: Beau Belgrave The ABI for user_events has changed from mmap() based to remote writes. Update the documentation to reflect these changes, add new section for unregistering events since lifetime is now tied to tasks instead of files. Link: https://lkml.kernel.org/r/20230328235219.203-10-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- Documentation/trace/user_events.rst | 167 ++++++++++++++++------------ 1 file changed, 97 insertions(+), 70 deletions(-) diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst index 422802ef4025..f79987e16cf4 100644 --- a/Documentation/trace/user_events.rst +++ b/Documentation/trace/user_events.rst @@ -20,11 +20,10 @@ dynamic_events is the same as the ioctl with the u: prefix applied. Typically programs will register a set of events that they wish to expose to tools that can read trace_events (such as ftrace and perf). The registration -process gives back two ints to the program for each event. The first int is -the status bit. This describes which bit in little-endian format in the -/sys/kernel/tracing/user_events_status file represents this event. The -second int is the write index which describes the data when a write() or -writev() is called on the /sys/kernel/tracing/user_events_data file. +process tells the kernel which address and bit to reflect if any tool has +enabled the event and data should be written. The registration will give back +a write index which describes the data when a write() or writev() is called +on the /sys/kernel/tracing/user_events_data file. The structures referenced in this document are contained within the /include/uapi/linux/user_events.h file in the source tree. @@ -41,23 +40,64 @@ DIAG_IOCSREG. This command takes a packed struct user_reg as an argument:: struct user_reg { - u32 size; - u64 name_args; - u32 status_bit; - u32 write_index; - }; + /* Input: Size of the user_reg structure being used */ + __u32 size; + + /* Input: Bit in enable address to use */ + __u8 enable_bit; + + /* Input: Enable size in bytes at address */ + __u8 enable_size; + + /* Input: Flags for future use, set to 0 */ + __u16 flags; + + /* Input: Address to update when enabled */ + __u64 enable_addr; + + /* Input: Pointer to string with event name, description and flags */ + __u64 name_args; + + /* Output: Index of the event to use when writing data */ + __u32 write_index; + } __attribute__((__packed__)); + +The struct user_reg requires all the above inputs to be set appropriately. + ++ size: This must be set to sizeof(struct user_reg). -The struct user_reg requires two inputs, the first is the size of the structure -to ensure forward and backward compatibility. The second is the command string -to issue for registering. Upon success two outputs are set, the status bit -and the write index. ++ enable_bit: The bit to reflect the event status at the address specified by + enable_addr. + ++ enable_size: The size of the value specified by enable_addr. + This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be + used on 64-bit kernels, however, 32-bit can be used on all kernels. + ++ flags: The flags to use, if any. 
For the initial version this must be 0. + Callers should first attempt to use flags and retry without flags to ensure + support for lower versions of the kernel. If a flag is not supported -EINVAL + is returned. + ++ enable_addr: The address of the value to use to reflect event status. This + must be naturally aligned and write accessible within the user program. + ++ name_args: The name and arguments to describe the event, see command format + for details. + +Upon successful registration the following is set. + ++ write_index: The index to use for this file descriptor that represents this + event when writing out data. The index is unique to this instance of the file + descriptor that was used for the registration. See writing data for details. User based events show up under tracefs like any other event under the subsystem named "user_events". This means tools that wish to attach to the events need to use /sys/kernel/tracing/events/user_events/[name]/enable or perf record -e user_events:[name] when attaching/recording. -**NOTE:** *The write_index returned is only valid for the FD that was used* +**NOTE:** The event subsystem name by default is "user_events". Callers should +not assume it will always be "user_events". Operators reserve the right in the +future to change the subsystem name per-process to accomodate event isolation. Command Format ^^^^^^^^^^^^^^ @@ -94,7 +134,7 @@ Would be represented by the following field:: struct mytype myname 20 Deleting ------------ +-------- Deleting an event from within a user process is done via ioctl() out to the /sys/kernel/tracing/user_events_data file. The command to issue is DIAG_IOCSDEL. @@ -104,92 +144,79 @@ its name. Delete will only succeed if there are no references left to the event (in both user and kernel space). User programs should use a separate file to request deletes than the one used for registration due to this. -Status ------- -When tools attach/record user based events the status of the event is updated -in realtime. This allows user programs to only incur the cost of the write() or -writev() calls when something is actively attached to the event. - -User programs call mmap() on /sys/kernel/tracing/user_events_status to -check the status for each event that is registered. The bit to check in the -file is given back after the register ioctl() via user_reg.status_bit. The bit -is always in little-endian format. Programs can check if the bit is set either -using a byte-wise index with a mask or a long-wise index with a little-endian -mask. +Unregistering +------------- +If after registering an event it is no longer wanted to be updated then it can +be disabled via ioctl() out to the /sys/kernel/tracing/user_events_data file. +The command to issue is DIAG_IOCSUNREG. This is different than deleting, where +deleting actually removes the event from the system. Unregistering simply tells +the kernel your process is no longer interested in updates to the event. -Currently the size of user_events_status is a single page, however, custom -kernel configurations can change this size to allow more user based events. In -all cases the size of the file is a multiple of a page size. +This command takes a packed struct user_unreg as an argument:: -For example, if the register ioctl() gives back a status_bit of 3 you would -check byte 0 (3 / 8) of the returned mmap data and then AND the result with 8 -(1 << (3 % 8)) to see if anything is attached to that event. 
+ struct user_unreg { + /* Input: Size of the user_unreg structure being used */ + __u32 size; -A byte-wise index check is performed as follows:: + /* Input: Bit to unregister */ + __u8 disable_bit; - int index, mask; - char *status_page; + /* Input: Reserved, set to 0 */ + __u8 __reserved; - index = status_bit / 8; - mask = 1 << (status_bit % 8); - - ... + /* Input: Reserved, set to 0 */ + __u16 __reserved2; - if (status_page[index] & mask) { - /* Enabled */ - } + /* Input: Address to unregister */ + __u64 disable_addr; + } __attribute__((__packed__)); -A long-wise index check is performed as follows:: +The struct user_unreg requires all the above inputs to be set appropriately. - #include - #include ++ size: This must be set to sizeof(struct user_unreg). - #if __BITS_PER_LONG == 64 - #define endian_swap(x) htole64(x) - #else - #define endian_swap(x) htole32(x) - #endif ++ disable_bit: This must be set to the bit to disable (same bit that was + previously registered via enable_bit). - long index, mask, *status_page; ++ disable_addr: This must be set to the address to disable (same address that was + previously registered via enable_addr). - index = status_bit / __BITS_PER_LONG; - mask = 1L << (status_bit % __BITS_PER_LONG); - mask = endian_swap(mask); +**NOTE:** Events are automatically unregistered when execve() is invoked. During +fork() the registered events will be retained and must be unregistered manually +in each process if wanted. - ... +Status +------ +When tools attach/record user based events the status of the event is updated +in realtime. This allows user programs to only incur the cost of the write() or +writev() calls when something is actively attached to the event. - if (status_page[index] & mask) { - /* Enabled */ - } +The kernel will update the specified bit that was registered for the event as +tools attach/detach from the event. User programs simply check if the bit is set +to see if something is attached or not. Administrators can easily check the status of all registered events by reading the user_events_status file directly via a terminal. The output is as follows:: - Byte:Name [# Comments] + Name [# Comments] ... Active: ActiveCount Busy: BusyCount - Max: MaxCount For example, on a system that has a single event the output looks like this:: - 1:test + test Active: 1 Busy: 0 - Max: 32768 If a user enables the user event via ftrace, the output would change to this:: - 1:test # Used by ftrace + test # Used by ftrace Active: 1 Busy: 1 - Max: 32768 - -**NOTE:** *A status bit of 0 will never be returned. 
This allows user programs -to have a bit that can be used on error cases.* Writing Data ------------ @@ -217,7 +244,7 @@ For example, if I have a struct like this:: int src; int dst; int flags; - }; + } __attribute__((__packed__)); It's advised for user programs to do the following:: From patchwork Wed Mar 29 19:45:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76786 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp649811vqo; Wed, 29 Mar 2023 12:53:16 -0700 (PDT) X-Google-Smtp-Source: AKy350aaCwLbMIDv/f8du60gBye74YZz6rvt+1U1XMTuTJ2BcMd9kA1A+OGjA0fNEBCvcTvh4/2I X-Received: by 2002:a17:90b:4c07:b0:23d:35cf:44be with SMTP id na7-20020a17090b4c0700b0023d35cf44bemr3231213pjb.6.1680119596700; Wed, 29 Mar 2023 12:53:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119596; cv=none; d=google.com; s=arc-20160816; b=VFlMzE3P0qKLQK0J8PQbZ2mVanPcnDgTnNBBtwjEw+3PFDAxEzXrWF6TnAsNUENd2X 7WGSIc8Se3rdWcrmvlvwyHra26isUX/TETyZ8KE6feLpwdsdSLMt4e9H4xnQ3eM0+W2s hOXG0Un4gGduMn0XhGofBz5VvLQlsKFfIfSAocgYBPR7nNMFtX332iO5jEgmUdY7dmUX pbvpyqLkylnnJmgplVkd/awYOHxWVf1fbbtrEvACMY/oC8gPEalMOEUwdmPsj+fl6HVO Tz9CZcFdTuRdKaEINYVrjQn1CCmSOOKGHMCxYekFwyd49451Kr6YduOdrlU7lbz9MG31 q2QA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=l9xvm+MQzeAfPgm1b7p6/bRfPXJS7QRPC8lfXKziuak=; b=ws1UxdAjhl560S4vOM8KnpGgS9uowA7OJ1i26dOIG7iA6T85pxvBQ215YcneOKfZeA BrDuoYWRiCzyCQNycFjpamdonqX4XcwB11yL/BplMz1DrrVUAXz0wUvZ5LOcyKHLfwGQ 7mUcqdcUgpf8oIrr+7WGNJ0AVt1hcrm6ViJhcAH5pspr8rhb4aLq6ylcvnGcNDZ2p2Cd 4aSzJUgkr+3awahNBaKtYP8OHsV2qRKUO/jvhXPPt+sRoxkOW8HV+431DftPKjJwPcXO Yd1Mygrdv+HCldpQOew0vblrRienxyid6XwuQueXd/IWqOSRu3ZnU9w7t26hnKAhGqUA nWrw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x5-20020a17090a970500b0023ff9bc6dfcsi2027237pjo.56.2023.03.29.12.53.01; Wed, 29 Mar 2023 12:53:16 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229753AbjC2TrG (ORCPT + 99 others); Wed, 29 Mar 2023 15:47:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230150AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E64776A4B for ; Wed, 29 Mar 2023 12:45:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 843C6B82438 for ; Wed, 29 Mar 2023 19:45:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5B56BC433EF; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk1-002Rpy-1Y; Wed, 29 Mar 2023 15:45:53 -0400 Message-ID: <20230329194553.300997623@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:37 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 21/25] tracing/user_events: Charge event allocs to cgroups References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733086363418488?= X-GMAIL-MSGID: =?utf-8?q?1761733086363418488?= From: Beau Belgrave Operators need a way to limit how much memory cgroups use. User events need to be included into that accounting. Fix this by using GFP_KERNEL_ACCOUNT for allocations generated by user programs for user_event tracing. 
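
[Editorial note: a minimal sketch of the pattern this patch applies, not part of the
patch itself; the structure and function names below are hypothetical. Any allocation
that a user program can trigger simply requests memcg-accounted memory:]

	#include <linux/slab.h>

	/* Hypothetical object allocated on behalf of a user process */
	struct my_user_obj {
		int id;
	};

	static struct my_user_obj *my_user_obj_alloc(void)
	{
		/*
		 * GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT, so the
		 * allocation is charged to the calling task's memory cgroup.
		 */
		return kzalloc(sizeof(struct my_user_obj), GFP_KERNEL_ACCOUNT);
	}

[Freeing is unchanged (kfree()); only the allocation flag differs, which is why the
diff below touches nothing but the GFP flags passed at each allocation site.]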
Link: https://lkml.kernel.org/r/20230328235219.203-11-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- kernel/trace/trace_events_user.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index f88bab3f1fe1..3a01c2df4a90 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -442,7 +442,7 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig, if (unlikely(test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(orig)))) return true; - enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT); + enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT | __GFP_ACCOUNT); if (!enabler) return false; @@ -502,7 +502,7 @@ static struct user_event_mm *user_event_mm_create(struct task_struct *t) struct user_event_mm *user_mm; unsigned long flags; - user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL); + user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL_ACCOUNT); if (!user_mm) return NULL; @@ -662,7 +662,7 @@ static struct user_event_enabler if (!user_mm) return NULL; - enabler = kzalloc(sizeof(*enabler), GFP_KERNEL); + enabler = kzalloc(sizeof(*enabler), GFP_KERNEL_ACCOUNT); if (!enabler) goto out; @@ -870,7 +870,7 @@ static int user_event_add_field(struct user_event *user, const char *type, struct ftrace_event_field *field; int validator_flags = 0; - field = kmalloc(sizeof(*field), GFP_KERNEL); + field = kmalloc(sizeof(*field), GFP_KERNEL_ACCOUNT); if (!field) return -ENOMEM; @@ -889,7 +889,7 @@ static int user_event_add_field(struct user_event *user, const char *type, if (strstr(type, "char") != NULL) validator_flags |= VALIDATOR_ENSURE_NULL; - validator = kmalloc(sizeof(*validator), GFP_KERNEL); + validator = kmalloc(sizeof(*validator), GFP_KERNEL_ACCOUNT); if (!validator) { kfree(field); @@ -1175,7 +1175,7 @@ static int user_event_create_print_fmt(struct user_event *user) len = user_event_set_print_fmt(user, NULL, 0); - print_fmt = kmalloc(len, GFP_KERNEL); + print_fmt = kmalloc(len, GFP_KERNEL_ACCOUNT); if (!print_fmt) return -ENOMEM; @@ -1508,7 +1508,7 @@ static int user_event_create(const char *raw_command) raw_command += USER_EVENTS_PREFIX_LEN; raw_command = skip_spaces(raw_command); - name = kstrdup(raw_command, GFP_KERNEL); + name = kstrdup(raw_command, GFP_KERNEL_ACCOUNT); if (!name) return -ENOMEM; @@ -1704,7 +1704,7 @@ static int user_event_parse(struct user_event_group *group, char *name, return 0; } - user = kzalloc(sizeof(*user), GFP_KERNEL); + user = kzalloc(sizeof(*user), GFP_KERNEL_ACCOUNT); if (!user) return -ENOMEM; @@ -1874,7 +1874,7 @@ static int user_events_open(struct inode *node, struct file *file) if (!group) return -ENOENT; - info = kzalloc(sizeof(*info), GFP_KERNEL); + info = kzalloc(sizeof(*info), GFP_KERNEL_ACCOUNT); if (!info) return -ENOMEM; @@ -1927,7 +1927,7 @@ static int user_events_ref_add(struct user_event_file_info *info, size = struct_size(refs, events, count + 1); - new_refs = kzalloc(size, GFP_KERNEL); + new_refs = kzalloc(size, GFP_KERNEL_ACCOUNT); if (!new_refs) return -ENOMEM; From patchwork Wed Mar 29 19:45:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76787 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp652231vqo; Wed, 29 Mar 2023 12:58:25 -0700 (PDT) X-Google-Smtp-Source: 
AKy350b3tsXOkX5NaMq5gy6L1nVrRFpTlqyXrE84f9iqDwNklbY3V5OaOSBHVulBVPEtSIIFWWoi X-Received: by 2002:a17:903:245:b0:19e:839e:49d8 with SMTP id j5-20020a170903024500b0019e839e49d8mr24477950plh.59.1680119905643; Wed, 29 Mar 2023 12:58:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680119905; cv=none; d=google.com; s=arc-20160816; b=chcovmG8KL69cgFP73QyMxxhe+f/6H5Q+FcltdYwXXu1+RR+jfnFnvlCnxXCP11utm pTyTqCwQHGKoKBBN2REKy63TQwBqGq2GGctRoeG9qJMCjldZK6dpUbsoD/i17N0VYQSd pvOvhVKzCO0EBus1/bre2VHq+USpSSTDmhAI+ApLhM4O61Eidemn0UR89Wpkp2k9oYoN knqZS4taG3KpVn04awYbts2peHPYf+N1SN74995+pi4OcCNu/MCPCtd9nONGE0JqdVVp b1VzOo2aynvUtXAdwwvJ7YiCI7xKeQ4ihpkfiLuwTzPJXVPwmkysr9pD5prg2CiS9xcs Owfw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=Iy85bl+SW2jzCjQypc9r9/L9Xg/jvBFWdyM62V6jnWA=; b=FQnOS7P26sJiHhdIsEK5NafGaBYPm1XAiu9vqNzHdbcWXbLoefXFdWZnUGT05g+P7S 2t16wtGZaQnAape7yqYNl1BKe9agG++Mabp2theFU157ip2qFiNpRt9J5UwSLgZXV83B aA5WqCr9PlKzz3VE30eFAq6AE1Ht90lENRahkrAlgbn41wZ8UiLX+nNgCyeELLr+oDa4 DmKnQ301jd4U6WHA/Ygeb5SvfMGOk0ZKEbpMVEAvaJ51HAFWuG6/XL6Lk9kizev6HuFe JEdgH3lGsezjwH/0iREWBTKaCarmm9ds390J/CiijbW1wJ+nBDmSCvmTEpyB3FXEGnXT tv7g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id bf4-20020a170902b90400b0019fe6800ed2si31396620plb.428.2023.03.29.12.58.10; Wed, 29 Mar 2023 12:58:25 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231147AbjC2Tqq (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230089AbjC2Tp5 (ORCPT ); Wed, 29 Mar 2023 15:45:57 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D15A8139 for ; Wed, 29 Mar 2023 12:45:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B223961E26 for ; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8ED5FC4339C; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk1-002RqW-2D; Wed, 29 Mar 2023 15:45:53 -0400 Message-ID: <20230329194553.504328919@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:38 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 22/25] tracing/user_events: Limit global user_event count References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 
required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761733410038092955?= X-GMAIL-MSGID: =?utf-8?q?1761733410038092955?= From: Beau Belgrave Operators want to be able to ensure enough tracepoints exist on the system for kernel components as well as for user components. Since there are only up to 64K events, by default allow up to half to be used by user events. Add a kernel sysctl parameter (kernel.user_events_max) to set a global limit that is honored among all groups on the system. This ensures hard limits can be setup to prevent user processes from consuming all event IDs on the system. Link: https://lkml.kernel.org/r/20230328235219.203-12-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- kernel/trace/trace_events_user.c | 47 ++++++++++++++++++++++++++++++++ 1 file changed, 47 insertions(+) diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 3a01c2df4a90..9b43a02e1597 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include "trace.h" #include "trace_dynevent.h" @@ -61,6 +62,12 @@ struct user_event_group { /* Group for init_user_ns mapping, top-most group */ static struct user_event_group *init_group; +/* Max allowed events for the whole system */ +static unsigned int max_user_events = 32768; + +/* Current number of events on the whole system */ +static unsigned int current_user_events; + /* * Stores per-event properties, as users register events * within a file a user_event might be created if it does not @@ -1241,6 +1248,8 @@ static int destroy_user_event(struct user_event *user) { int ret = 0; + lockdep_assert_held(&event_mutex); + /* Must destroy fields before call removal */ user_event_destroy_fields(user); @@ -1257,6 +1266,11 @@ static int destroy_user_event(struct user_event *user) kfree(EVENT_NAME(user)); kfree(user); + if (current_user_events > 0) + current_user_events--; + else + pr_alert("BUG: Bad current_user_events\n"); + return ret; } @@ -1744,6 +1758,11 @@ static int user_event_parse(struct user_event_group *group, char *name, mutex_lock(&event_mutex); + if (current_user_events >= max_user_events) { + ret = -EMFILE; + goto put_user_lock; + } + ret = user_event_trace_register(user); if (ret) @@ -1755,6 +1774,7 @@ static int user_event_parse(struct user_event_group *group, char *name, dyn_event_init(&user->devent, &user_event_dops); dyn_event_add(&user->devent, &user->call); hash_add(group->register_table, &user->node, key); + current_user_events++; mutex_unlock(&event_mutex); @@ -2390,6 +2410,31 @@ static int create_user_tracefs(void) return -ENODEV; } +static int set_max_user_events_sysctl(struct ctl_table *table, int write, + void *buffer, size_t *lenp, loff_t *ppos) +{ + int ret; + + mutex_lock(&event_mutex); + + ret = proc_douintvec(table, write, buffer, lenp, ppos); + + mutex_unlock(&event_mutex); + + return ret; +} + +static struct ctl_table user_event_sysctls[] = { + { + .procname = "user_events_max", + .data = &max_user_events, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = set_max_user_events_sysctl, + }, + {} +}; + static int __init 
trace_events_user_init(void) { int ret; @@ -2419,6 +2464,8 @@ static int __init trace_events_user_init(void) if (dyn_event_register(&user_event_dops)) pr_warn("user_events could not register with dyn_events\n"); + register_sysctl_init("kernel", user_event_sysctls); + return 0; } From patchwork Wed Mar 29 19:45:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76806 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp662419vqo; Wed, 29 Mar 2023 13:15:23 -0700 (PDT) X-Google-Smtp-Source: AKy350bE5MUUtZy5aJYxYMG2cIAcKAaZ/9KUt1CA5anIkBDK9YKQl0uE9t9wkQBEsz1wGOqzF6ns X-Received: by 2002:aa7:d713:0:b0:501:d52d:7f88 with SMTP id t19-20020aa7d713000000b00501d52d7f88mr19859907edq.10.1680120923089; Wed, 29 Mar 2023 13:15:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680120923; cv=none; d=google.com; s=arc-20160816; b=xqoknR1ynBCf/EpXPSJdc2ZNvghFu7lIw1U/+8jRgJIT56+7a6+xdGobVCnZbn/2NU ou2Ztm3h6uixeYe9/uHHbPDxQlkMq5fVWm/ik6v0c6+B+Ltbd86louHgxmAk5tcQxCxm HPX4HjKeMlFqXIRT18ms0O1BAswJurx9PoIQ6962Hf7ZGz0y6d36ugxYYJGYla+mHYU5 nAOXnNnP6vvzb+40hnVWMFjrquheLopZ15omVWbZ6wdi0CeUA1BlkcSlqCNwcn5gfNNg 2jved1ODQJMCK6Jrx8q/DKOCYlEjJ+V4uQjLq3BUvZ+Jzb74lTjHNZCjXi6k22s24gyE L54Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id; bh=XmSzQ4H+KzI3UcMx44FEVNOPdn4mzQmonXPRPsezn6w=; b=qdkPcokl0p2IMSHe2zxeZOi5JUenyV/qhnfhNmQwtEIIXkiu3sUzAQOGom6fEAE3KW AAVZYGKfjGs4xedzM6ih8F6VJNOkxyPuUlABqLbxzV23BSx4h5VrAith+AE+q2jZ7r4I vFWLVncWl0JAqWO4Xq68YqGJSRBWclIYh6/kqV3wVL48nIRTHGE4z/IePVAMDhL3psOy iDJ85fPl/GtinjINntG+gqHPlRxRQyLIvWc66JyYVsymsh/HslLD/MEzShS2xMzl/HVq pPKy4UmijuVJITtcv0bNve5w4juUz/c27WMhW8x/vKZ5n0Cyc6DbIUbARjh/sq9k5cYb OUxw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id m22-20020a50ef16000000b00501df8f316bsi21268759eds.308.2023.03.29.13.14.58; Wed, 29 Mar 2023 13:15:23 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231179AbjC2Tqw (ORCPT + 99 others); Wed, 29 Mar 2023 15:46:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230104AbjC2Tp6 (ORCPT ); Wed, 29 Mar 2023 15:45:58 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5AE741B8 for ; Wed, 29 Mar 2023 12:45:55 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E4E3261E2A for ; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C144EC433A0; Wed, 29 Mar 2023 19:45:54 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.96) (envelope-from ) id 1phbk1-002Rr4-2t; Wed, 29 Mar 2023 15:45:53 -0400 Message-ID: <20230329194553.709416963@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:39 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 23/25] tracing/user_events: Align structs with tabs for readability References: <20230329194516.146147554@goodmis.org> MIME-Version: 1.0 X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761734476561946980?= X-GMAIL-MSGID: =?utf-8?q?1761734476561946980?= From: Beau Belgrave Add tabs to make struct members easier to read and unify the style of the code. 
Link: https://lkml.kernel.org/r/20230328235219.203-13-beaub@linux.microsoft.com Signed-off-by: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- include/linux/user_events.h | 14 +++--- include/uapi/linux/user_events.h | 24 +++++----- kernel/trace/trace_events_user.c | 82 ++++++++++++++++---------------- 3 files changed, 60 insertions(+), 60 deletions(-) diff --git a/include/linux/user_events.h b/include/linux/user_events.h index 0120b3dd5b03..2847f5a18a86 100644 --- a/include/linux/user_events.h +++ b/include/linux/user_events.h @@ -17,13 +17,13 @@ #ifdef CONFIG_USER_EVENTS struct user_event_mm { - struct list_head link; - struct list_head enablers; - struct mm_struct *mm; - struct user_event_mm *next; - refcount_t refcnt; - refcount_t tasks; - struct rcu_work put_rwork; + struct list_head link; + struct list_head enablers; + struct mm_struct *mm; + struct user_event_mm *next; + refcount_t refcnt; + refcount_t tasks; + struct rcu_work put_rwork; }; extern void user_event_mm_dup(struct task_struct *t, diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h index 3e7275e3234a..2984aae4a2b4 100644 --- a/include/uapi/linux/user_events.h +++ b/include/uapi/linux/user_events.h @@ -25,25 +25,25 @@ struct user_reg { /* Input: Size of the user_reg structure being used */ - __u32 size; + __u32 size; /* Input: Bit in enable address to use */ - __u8 enable_bit; + __u8 enable_bit; /* Input: Enable size in bytes at address */ - __u8 enable_size; + __u8 enable_size; /* Input: Flags for future use, set to 0 */ - __u16 flags; + __u16 flags; /* Input: Address to update when enabled */ - __u64 enable_addr; + __u64 enable_addr; /* Input: Pointer to string with event name, description and flags */ - __u64 name_args; + __u64 name_args; /* Output: Index of the event to use when writing data */ - __u32 write_index; + __u32 write_index; } __attribute__((__packed__)); /* @@ -52,19 +52,19 @@ struct user_reg { */ struct user_unreg { /* Input: Size of the user_unreg structure being used */ - __u32 size; + __u32 size; /* Input: Bit to unregister */ - __u8 disable_bit; + __u8 disable_bit; /* Input: Reserved, set to 0 */ - __u8 __reserved; + __u8 __reserved; /* Input: Reserved, set to 0 */ - __u16 __reserved2; + __u16 __reserved2; /* Input: Address to unregister */ - __u64 disable_addr; + __u64 disable_addr; } __attribute__((__packed__)); #define DIAG_IOC_MAGIC '*' diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 9b43a02e1597..67cb7b53caf6 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -53,9 +53,9 @@ * allows isolation for events by various means. */ struct user_event_group { - char *system_name; - struct hlist_node node; - struct mutex reg_mutex; + char *system_name; + struct hlist_node node; + struct mutex reg_mutex; DECLARE_HASHTABLE(register_table, 8); }; @@ -76,17 +76,17 @@ static unsigned int current_user_events; * refcnt reaches one. 
*/ struct user_event { - struct user_event_group *group; - struct tracepoint tracepoint; - struct trace_event_call call; - struct trace_event_class class; - struct dyn_event devent; - struct hlist_node node; - struct list_head fields; - struct list_head validators; - refcount_t refcnt; - int min_size; - char status; + struct user_event_group *group; + struct tracepoint tracepoint; + struct trace_event_call call; + struct trace_event_class class; + struct dyn_event devent; + struct hlist_node node; + struct list_head fields; + struct list_head validators; + refcount_t refcnt; + int min_size; + char status; }; /* @@ -95,12 +95,12 @@ struct user_event { * these to track enablement sites that are tied to an event. */ struct user_event_enabler { - struct list_head link; - struct user_event *event; - unsigned long addr; + struct list_head link; + struct user_event *event; + unsigned long addr; /* Track enable bit, flags, etc. Aligned for bitops. */ - unsigned int values; + unsigned int values; }; /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ @@ -119,9 +119,9 @@ struct user_event_enabler { /* Used for asynchronous faulting in of pages */ struct user_event_enabler_fault { - struct work_struct work; - struct user_event_mm *mm; - struct user_event_enabler *enabler; + struct work_struct work; + struct user_event_mm *mm; + struct user_event_enabler *enabler; }; static struct kmem_cache *fault_cache; @@ -137,23 +137,23 @@ static DEFINE_SPINLOCK(user_event_mms_lock); * These are not shared and only accessible by the file that created it. */ struct user_event_refs { - struct rcu_head rcu; - int count; - struct user_event *events[]; + struct rcu_head rcu; + int count; + struct user_event *events[]; }; struct user_event_file_info { - struct user_event_group *group; - struct user_event_refs *refs; + struct user_event_group *group; + struct user_event_refs *refs; }; #define VALIDATOR_ENSURE_NULL (1 << 0) #define VALIDATOR_REL (1 << 1) struct user_event_validator { - struct list_head link; - int offset; - int flags; + struct list_head link; + int offset; + int flags; }; typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i, @@ -2276,11 +2276,11 @@ static int user_events_release(struct inode *node, struct file *file) } static const struct file_operations user_data_fops = { - .open = user_events_open, - .write = user_events_write, - .write_iter = user_events_write_iter, + .open = user_events_open, + .write = user_events_write, + .write_iter = user_events_write_iter, .unlocked_ioctl = user_events_ioctl, - .release = user_events_release, + .release = user_events_release, }; static void *user_seq_start(struct seq_file *m, loff_t *pos) @@ -2346,10 +2346,10 @@ static int user_seq_show(struct seq_file *m, void *p) } static const struct seq_operations user_seq_ops = { - .start = user_seq_start, - .next = user_seq_next, - .stop = user_seq_stop, - .show = user_seq_show, + .start = user_seq_start, + .next = user_seq_next, + .stop = user_seq_stop, + .show = user_seq_show, }; static int user_status_open(struct inode *node, struct file *file) @@ -2375,10 +2375,10 @@ static int user_status_open(struct inode *node, struct file *file) } static const struct file_operations user_status_fops = { - .open = user_status_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release, + .open = user_status_open, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, }; /* From patchwork Wed Mar 29 19:45:40 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76800
Message-ID: <20230329194553.932013715@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:40 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Beau Belgrave Subject: [for-next][PATCH 24/25] tracing/user_events: Use print_format_fields() for trace output References: <20230329194516.146147554@goodmis.org>
From: "Steven Rostedt (Google)" Currently, user events are shown using the "hex" output for "safety" reasons, as one cannot trust user events to behave nicely. But hex is not the only safe way to output trace events: print_event_fields() is just as safe and gives human-readable output.
Before: example-839 [001] ..... 43.222244: 00000000: b1 06 00 00 47 03 00 00 00 00 00 00 ....G....... example-839 [001] ..... 43.564433: 00000000: b1 06 00 00 47 03 00 00 01 00 00 00 ....G....... example-839 [001] ..... 43.763917: 00000000: b1 06 00 00 47 03 00 00 02 00 00 00 ....G....... example-839 [001] ..... 43.967929: 00000000: b1 06 00 00 47 03 00 00 03 00 00 00 ....G.......
After: example-837 [006] ..... 55.739249: test: count=0x0 (0) example-837 [006] ..... 111.104784: test: count=0x1 (1) example-837 [006] ..... 111.268444: test: count=0x2 (2) example-837 [006] ..... 111.416533: test: count=0x3 (3) example-837 [006] .....
111.542859: test: count=0x4 (4) Link: https://lore.kernel.org/linux-trace-kernel/20230328151413.4770b8d7@gandalf.local.home Cc: Masami Hiramatsu Cc: Mark Rutland Cc: Beau Belgrave Signed-off-by: Steven Rostedt (Google) --- kernel/trace/trace_events_user.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 67cb7b53caf6..cc8c6d8b69b5 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -22,8 +22,9 @@ #include #include #include -#include "trace.h" #include "trace_dynevent.h" +#include "trace_output.h" +#include "trace.h" #define USER_EVENTS_PREFIX_LEN (sizeof(USER_EVENTS_PREFIX)-1) @@ -1198,11 +1199,7 @@ static enum print_line_t user_event_print_trace(struct trace_iterator *iter, int flags, struct trace_event *event) { - /* Unsafe to try to decode user provided print_fmt, use hex */ - trace_print_hex_dump_seq(&iter->seq, "", DUMP_PREFIX_OFFSET, 16, - 1, iter->ent, iter->ent_size, true); - - return trace_handle_return(&iter->seq); + return print_event_fields(iter, event); } static struct trace_event_functions user_event_funcs = { From patchwork Wed Mar 29 19:45:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 76785
Message-ID: <20230329194554.139185152@goodmis.org> User-Agent: quilt/0.66 Date: Wed, 29 Mar 2023 15:45:41 -0400 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Andrew Morton , Mathieu Desnoyers Subject: [for-next][PATCH 25/25] tracing: Unbreak user events References: <20230329194516.146147554@goodmis.org>
From: "Steven Rostedt (Google)" The user events interface was added a bit prematurely, and a few kernel developers had issues with it. The API also needed a bit of work to make sure it would be stable. It was decided to mark user events "broken" until this was settled. Now it has a new API that appears to be as stable as it will be without the use of a crystal ball. It's being used within Microsoft as-is, which means the API has had some testing in real-world use cases. It went through many discussions in the bi-weekly tracing meetings, and there have been no more comments about updates. I feel this is good to go.
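For context, this is roughly what registering and writing an event through the stabilized ABI (the struct user_reg / struct user_unreg layout shown earlier in this series) looks like from user space. It is an illustrative sketch only, not part of the patch: the /sys/kernel/tracing/user_events_data file, the DIAG_IOCSREG/DIAG_IOCSUNREG ioctl numbers, and the "prepend the returned write_index to every write" convention are assumed from the user_events UAPI header and documentation, not from anything quoted in this mail.

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/user_events.h>	/* struct user_reg, struct user_unreg, DIAG_IOCS* (assumed) */

static uint32_t enabled;	/* kernel flips enable_bit here when the event is enabled */

int main(void)
{
	struct user_reg reg = {0};
	struct user_unreg unreg = {0};
	uint32_t count = 4;
	int fd;

	fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);
	if (fd < 0)
		return 1;

	reg.size = sizeof(reg);
	reg.enable_bit = 31;			/* bit of 'enabled' the kernel should update */
	reg.enable_size = sizeof(enabled);	/* 4-byte enable word */
	reg.enable_addr = (uint64_t)(uintptr_t)&enabled;
	reg.name_args = (uint64_t)(uintptr_t)"test u32 count";

	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)	/* on success, reg.write_index is filled in */
		return 1;

	if (enabled & (1U << reg.enable_bit)) {
		/* payload = write_index followed by the declared fields */
		struct iovec io[2] = {
			{ &reg.write_index, sizeof(reg.write_index) },
			{ &count, sizeof(count) },
		};
		writev(fd, io, 2);
	}

	unreg.size = sizeof(unreg);
	unreg.disable_bit = reg.enable_bit;
	unreg.disable_addr = (uint64_t)(uintptr_t)&enabled;
	ioctl(fd, DIAG_IOCSUNREG, &unreg);

	close(fd);
	return 0;
}

With CONFIG_USER_EVENTS=y, enabling events/user_events/test/enable and reading the trace file should then produce "test: count=..." lines like the example output in the previous patch.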
Cc: Mathieu Desnoyers Signed-off-by: Steven Rostedt (Google) --- kernel/trace/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index c7020e071bf9..8cf97fa4a4b3 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -792,7 +792,6 @@ config USER_EVENTS bool "User trace events" select TRACING select DYNAMIC_EVENTS - depends on BROKEN || COMPILE_TEST # API needs to be straighten out help User trace events are user-defined trace events that can be used like an existing kernel trace event. User trace