From patchwork Wed Nov 8 14:27:04 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 163045
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
 Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
 Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
 Thomas Gleixner, Guo Ren
Subject: [RFC PATCH v2 14/31] function_graph: Move set_graph_function tests
 to shadow stack global var
Date: Wed, 8 Nov 2023 23:27:04 +0900
Message-Id: <169945362406.55307.7939383981224458445.stgit@devnote2>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <169945345785.55307.5003201137843449313.stgit@devnote2>
References: <169945345785.55307.5003201137843449313.stgit@devnote2>
User-Agent: StGit/0.19
X-Mailing-List: linux-kernel@vger.kernel.org

From: Steven Rostedt (VMware)

The use of task->trace_recursion for the set_graph_function logic was a bit
of an abuse of that variable. Now that registered graph tracers have
per-task global variables on the shadow stack, use those instead.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 include/linux/trace_recursion.h      |    5 +----
 kernel/trace/trace.h                 |   32 +++++++++++++++++++++-----------
 kernel/trace/trace_functions_graph.c |    6 +++---
 kernel/trace/trace_irqsoff.c         |    4 ++--
 kernel/trace/trace_sched_wakeup.c    |    4 ++--
 5 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index d48cd92d2364..2efd5ec46d7f 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -44,9 +44,6 @@ enum {
 	 */
 	TRACE_IRQ_BIT,
 
-	/* Set if the function is in the set_graph_function file */
-	TRACE_GRAPH_BIT,
-
 	/*
 	 * In the very unlikely case that an interrupt came in
 	 * at a start of graph tracing, and we want to trace
@@ -60,7 +57,7 @@ enum {
 	 * that preempted a softirq start of a function that
 	 * preempted normal context!!!! Luckily, it can't be
 	 * greater than 3, so the next two bits are a mask
-	 * of what the depth is when we set TRACE_GRAPH_BIT
+	 * of what the depth is when we set TRACE_GRAPH_FL
 	 */
 	TRACE_GRAPH_DEPTH_START_BIT,
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index f77322e3b177..60d38709ab91 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -889,11 +889,16 @@ extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops
 extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
 extern void free_fgraph_ops(struct trace_array *tr);
 
+enum {
+	TRACE_GRAPH_FL		= 1,
+};
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 extern struct ftrace_hash __rcu *ftrace_graph_hash;
 extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash;
 
-static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
+static inline int
+ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
 {
 	unsigned long addr = trace->func;
 	int ret = 0;
@@ -915,12 +920,11 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	}
 
 	if (ftrace_lookup_ip(hash, addr)) {
-
 		/*
 		 * This needs to be cleared on the return functions
 		 * when the depth is zero.
 		 */
-		trace_recursion_set(TRACE_GRAPH_BIT);
+		*task_var |= TRACE_GRAPH_FL;
 		trace_recursion_set_depth(trace->depth);
 
 		/*
@@ -940,11 +944,14 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	return ret;
 }
 
-static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+static inline void
+ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
 {
-	if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+	unsigned long *task_var = fgraph_get_task_var(gops);
+
+	if ((*task_var & TRACE_GRAPH_FL) &&
 	    trace->depth == trace_recursion_depth())
-		trace_recursion_clear(TRACE_GRAPH_BIT);
+		*task_var &= ~TRACE_GRAPH_FL;
 }
 
 static inline int ftrace_graph_notrace_addr(unsigned long addr)
@@ -971,7 +978,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 }
 
 #else
-static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
+static inline int ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
 {
 	return 1;
 }
@@ -980,17 +987,20 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
 	return 0;
 }
-static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
 { }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 extern unsigned int fgraph_max_depth;
 
-static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
+static inline bool
+ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace)
 {
+	unsigned long *task_var = fgraph_get_task_var(gops);
+
 	/* trace it when it is-nested-in or is a function enabled. */
-	return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
-		 ftrace_graph_addr(trace)) ||
+	return !((*task_var & TRACE_GRAPH_FL) ||
+		 ftrace_graph_addr(task_var, trace)) ||
 		(trace->depth < 0) ||
 		(fgraph_max_depth && trace->depth >= fgraph_max_depth);
 }
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 7f30652f0e97..66cce73e94f8 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -160,7 +160,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	if (!ftrace_trace_task(tr))
 		return 0;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 
 	if (ftrace_graph_ignore_irqs())
@@ -247,7 +247,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	long disabled;
 	int cpu;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
 		trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
@@ -269,7 +269,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
 				      struct fgraph_ops *gops)
 {
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
 		trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 5478f4c4f708..fce064e20570 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -184,7 +184,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
 	unsigned int trace_ctx;
 	int ret;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 	/*
 	 * Do not trace a function if it's filtered by set_graph_notrace.
@@ -214,7 +214,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
 	unsigned long flags;
 	unsigned int trace_ctx;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (!func_prolog_dec(tr, &data, &flags))
 		return;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 49bcc812652c..130ca7e7787e 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -120,7 +120,7 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
 	unsigned int trace_ctx;
 	int ret = 0;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 	/*
 	 * Do not trace a function if it's filtered by set_graph_notrace.
@@ -149,7 +149,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
 	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (!func_prolog_preempt_disable(tr, &data, &trace_ctx))
 		return;
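
For readers following the change outside the kernel tree, here is a minimal
user-space sketch of the pattern this patch switches to: each graph tracer
keeps a TRACE_GRAPH_FL flag in its own per-task variable, instead of all
tracers sharing the TRACE_GRAPH_BIT in task->trace_recursion. It is
illustration only, not kernel code; task_var_for() is a hypothetical
stand-in for fgraph_get_task_var(gops), and the filter/depth tests are
reduced to plain booleans.

/*
 * Sketch of the per-tracer, per-task flag pattern (user-space, illustrative).
 */
#include <stdbool.h>
#include <stdio.h>

#define TRACE_GRAPH_FL	1UL	/* mirrors the enum added to kernel/trace/trace.h */

struct task {				/* stand-in for struct task_struct */
	unsigned long fgraph_var;	/* per-task storage owned by one tracer */
};

static unsigned long *task_var_for(struct task *t)
{
	/* kernel code would get this via fgraph_get_task_var(gops) */
	return &t->fgraph_var;
}

/* Entry side: mark the task once a filtered function is hit. */
static bool graph_entry(struct task *t, bool func_is_filtered)
{
	unsigned long *task_var = task_var_for(t);

	if (*task_var & TRACE_GRAPH_FL)		/* already nested in a filtered function */
		return true;
	if (func_is_filtered) {
		*task_var |= TRACE_GRAPH_FL;	/* start tracing this call chain */
		return true;
	}
	return false;				/* ignore: not a filtered function */
}

/* Return side: clear the flag when the filtered function itself returns. */
static void graph_return(struct task *t, bool back_at_filter_depth)
{
	unsigned long *task_var = task_var_for(t);

	if ((*task_var & TRACE_GRAPH_FL) && back_at_filter_depth)
		*task_var &= ~TRACE_GRAPH_FL;
}

int main(void)
{
	struct task t = { 0 };

	printf("%d\n", graph_entry(&t, false));	/* 0: not filtered, not nested    */
	printf("%d\n", graph_entry(&t, true));	/* 1: filtered function entered   */
	printf("%d\n", graph_entry(&t, false));	/* 1: nested in filtered function */
	graph_return(&t, true);
	printf("%d\n", graph_entry(&t, false));	/* 0: flag cleared on return      */
	return 0;
}

Because the flag lives in storage keyed by the fgraph_ops instance, two graph
tracers with different set_graph_function filters no longer clobber each
other's per-task state, which the shared recursion bit could not guarantee.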