From patchwork Mon Nov 27 13:54:33 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170149
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 08/33] function_graph: Remove logic around ftrace_graph_entry and return
Date: Mon, 27 Nov 2023 22:54:33 +0900
Message-Id: <170109327280.343914.16318101741773152293.stgit@devnote2>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>
User-Agent: StGit/0.19
X-Mailing-List: linux-kernel@vger.kernel.org

From: Steven Rostedt (VMware)
The function pointers ftrace_graph_entry and ftrace_graph_return are no
longer called via the function_graph tracer. Instead, an array structure
is now used that allows for multiple users of the function_graph
infrastructure. The variables are still used by the architecture code
for non-dynamic ftrace configs, where a test is made against them to see
whether they point to the default stub functions or not. This is how the
static function tracing knows whether to call into the function_graph
tracer infrastructure.

Two new stub functions are added: entry_run() and return_run().
ftrace_graph_entry and ftrace_graph_return are set to them respectively
when the function_graph tracer is enabled, which triggers the
architecture-specific function_graph code to be executed.

This also requires checking the global_ops hash for all calls into the
function_graph tracer.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v2:
- Fix typo and make lines shorter than 76 chars in the description.
- Remove unneeded return from return_run() function.
---
 kernel/trace/fgraph.c          | 71 +++++++++++-----------------------------
 kernel/trace/ftrace.c          |  2 -
 kernel/trace/ftrace_internal.h |  2 -
 3 files changed, 19 insertions(+), 56 deletions(-)

diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8aba93be11b2..97a9ffb8bb4c 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -128,6 +128,17 @@ static inline int get_fgraph_array(struct task_struct *t, int offset)
 		FGRAPH_ARRAY_MASK;
 }
 
+/* ftrace_graph_entry set to this to tell some archs to run function graph */
+static int entry_run(struct ftrace_graph_ent *trace)
+{
+	return 0;
+}
+
+/* ftrace_graph_return set to this to tell some archs to run function graph */
+static void return_run(struct ftrace_graph_ret *trace)
+{
+}
+
 /*
  * @offset: The index into @t->ret_stack to find the ret_stack entry
  * @index: Where to place the index into @t->ret_stack of that entry
@@ -323,6 +334,10 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 	    ftrace_find_rec_direct(ret - MCOUNT_INSN_SIZE))
 		return -EBUSY;
 #endif
+
+	if (!ftrace_ops_test(&global_ops, func, NULL))
+		return -EBUSY;
+
 	trace.func = func;
 	trace.depth = ++current->curr_ret_depth;
 
@@ -664,7 +679,6 @@ extern void ftrace_stub_graph(struct ftrace_graph_ret *);
 /* The callbacks that hook a function */
 trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph;
 trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;
-static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub;
 
 /* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */
 static int alloc_retstack_tasklist(unsigned long **ret_stack_list)
@@ -747,46 +761,6 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
 	}
 }
 
-static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace)
-{
-	if (!ftrace_ops_test(&global_ops, trace->func, NULL))
-		return 0;
-	return __ftrace_graph_entry(trace);
-}
-
-/*
- * The function graph tracer should only trace the functions defined
- * by set_ftrace_filter and set_ftrace_notrace. If another function
- * tracer ops is registered, the graph tracer requires testing the
- * function against the global ops, and not just trace any function
- * that any ftrace_ops registered.
- */
-void update_function_graph_func(void)
-{
-	struct ftrace_ops *op;
-	bool do_test = false;
-
-	/*
-	 * The graph and global ops share the same set of functions
-	 * to test. If any other ops is on the list, then
-	 * the graph tracing needs to test if its the function
-	 * it should call.
-	 */
-	do_for_each_ftrace_op(op, ftrace_ops_list) {
-		if (op != &global_ops && op != &graph_ops &&
-		    op != &ftrace_list_end) {
-			do_test = true;
-			/* in double loop, break out with goto */
-			goto out;
-		}
-	} while_for_each_ftrace_op(op);
- out:
-	if (do_test)
-		ftrace_graph_entry = ftrace_graph_entry_test;
-	else
-		ftrace_graph_entry = __ftrace_graph_entry;
-}
-
 static DEFINE_PER_CPU(unsigned long *, idle_ret_stack);
 
 static void
@@ -927,18 +901,12 @@ int register_ftrace_graph(struct fgraph_ops *gops)
 			ftrace_graph_active--;
 			goto out;
 		}
-
-		ftrace_graph_return = gops->retfunc;
-
 		/*
-		 * Update the indirect function to the entryfunc, and the
-		 * function that gets called to the entry_test first. Then
-		 * call the update fgraph entry function to determine if
-		 * the entryfunc should be called directly or not.
+		 * Some archs just test to see if these are not
+		 * the default function
 		 */
-		__ftrace_graph_entry = gops->entryfunc;
-		ftrace_graph_entry = ftrace_graph_entry_test;
-		update_function_graph_func();
+		ftrace_graph_return = return_run;
+		ftrace_graph_entry = entry_run;
 
 		ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
 	}
@@ -974,7 +942,6 @@ void unregister_ftrace_graph(struct fgraph_ops *gops)
 	if (!ftrace_graph_active) {
 		ftrace_graph_return = ftrace_stub_graph;
 		ftrace_graph_entry = ftrace_graph_entry_stub;
-		__ftrace_graph_entry = ftrace_graph_entry_stub;
 		ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
 		unregister_pm_notifier(&ftrace_suspend_notifier);
 		unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8de8bec5f366..fd64021ec52f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -235,8 +235,6 @@ static void update_ftrace_function(void)
 		func = ftrace_ops_list_func;
 	}
 
-	update_function_graph_func();
-
 	/* If there's no change, then do nothing more here */
 	if (ftrace_trace_function == func)
 		return;
diff --git a/kernel/trace/ftrace_internal.h b/kernel/trace/ftrace_internal.h
index 5012c04f92c0..19eddcb91584 100644
--- a/kernel/trace/ftrace_internal.h
+++ b/kernel/trace/ftrace_internal.h
@@ -42,10 +42,8 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 extern int ftrace_graph_active;
-void update_function_graph_func(void);
 #else /* !CONFIG_FUNCTION_GRAPH_TRACER */
 # define ftrace_graph_active 0
-static inline void update_function_graph_func(void) { }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
 #else /* !CONFIG_FUNCTION_TRACER */
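
For readers unfamiliar with the stub-pointer test the commit message refers
to, here is a minimal, stand-alone C sketch of the idea: architecture code
built without dynamic ftrace only calls into the graph infrastructure when
ftrace_graph_entry/ftrace_graph_return no longer point at the default stubs,
and enabling the tracer simply flips those pointers (as entry_run()/
return_run() do in this patch). Every name below (graph_entry, entry_stub,
arch_static_graph_caller, ...) is an illustrative user-space stand-in, not
the kernel's actual symbols or arch code.

	#include <stdio.h>

	/* Illustrative stand-ins for ftrace_graph_ent / ftrace_graph_ret. */
	struct graph_ent { const char *func; };
	struct graph_ret { const char *func; };

	/* Default stubs: tracing is "off" while the pointers point here. */
	static int entry_stub(struct graph_ent *trace) { (void)trace; return 0; }
	static void return_stub(struct graph_ret *trace) { (void)trace; }

	/* Pointers the (hypothetical) arch code compares against the stubs. */
	static int (*graph_entry)(struct graph_ent *) = entry_stub;
	static void (*graph_return)(struct graph_ret *) = return_stub;

	/* Demo-only replacements; the kernel's entry_run() just returns 0. */
	static int entry_run_demo(struct graph_ent *trace) { (void)trace; return 1; }
	static void return_run_demo(struct graph_ret *trace) { (void)trace; }

	/* Stand-in for the static arch caller: take the graph path only when
	 * the pointers were switched away from the default stubs. */
	static void arch_static_graph_caller(const char *func)
	{
		if (graph_entry != entry_stub || graph_return != return_stub) {
			struct graph_ent ent = { func };
			struct graph_ret ret = { func };

			if (graph_entry(&ent))
				printf("graph entry for %s\n", func);
			graph_return(&ret);
		}
	}

	int main(void)
	{
		arch_static_graph_caller("foo");  /* stubs installed: nothing happens */

		/* "Enable" tracing by flipping the pointers, mirroring what
		 * register_ftrace_graph() now does with entry_run()/return_run(). */
		graph_entry = entry_run_demo;
		graph_return = return_run_demo;
		arch_static_graph_caller("foo");  /* graph path is taken */

		return 0;
	}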