From patchwork Mon Dec 18 13:14:22 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 180400
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
 Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
 Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
 Thomas Gleixner, Guo Ren
Subject: [PATCH v5 14/34] function_graph: Move set_graph_function tests to
 shadow stack global var
Date: Mon, 18 Dec 2023 22:14:22 +0900
Message-Id: <170290526206.220107.14232017944689708425.stgit@devnote2>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <170290509018.220107.1347127510564358608.stgit@devnote2>
References: <170290509018.220107.1347127510564358608.stgit@devnote2>
User-Agent: StGit/0.19

From: Steven Rostedt (VMware)

The use of the task->trace_recursion for the logic used for
set_graph_function was a bit of an abuse of that
variable. Now that there exist global variables that are per-stack for
registered graph tracers, use those instead.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 include/linux/trace_recursion.h      |  5 +----
 kernel/trace/trace.h                 | 32 +++++++++++++++++++++-----------
 kernel/trace/trace_functions_graph.c |  6 +++---
 kernel/trace/trace_irqsoff.c         |  4 ++--
 kernel/trace/trace_sched_wakeup.c    |  4 ++--
 5 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index d48cd92d2364..2efd5ec46d7f 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -44,9 +44,6 @@ enum {
 	 */
 	TRACE_IRQ_BIT,
 
-	/* Set if the function is in the set_graph_function file */
-	TRACE_GRAPH_BIT,
-
 	/*
 	 * In the very unlikely case that an interrupt came in
 	 * at a start of graph tracing, and we want to trace
@@ -60,7 +57,7 @@ enum {
 	 * that preempted a softirq start of a function that
 	 * preempted normal context!!!! Luckily, it can't be
 	 * greater than 3, so the next two bits are a mask
-	 * of what the depth is when we set TRACE_GRAPH_BIT
+	 * of what the depth is when we set TRACE_GRAPH_FL
 	 */
 
 	TRACE_GRAPH_DEPTH_START_BIT,
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 02edfdb68933..3b9266aeb43b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -894,11 +894,16 @@ extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops
 extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
 extern void free_fgraph_ops(struct trace_array *tr);
 
+enum {
+	TRACE_GRAPH_FL		= 1,
+};
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 extern struct ftrace_hash __rcu *ftrace_graph_hash;
 extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash;
 
-static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
+static inline int
+ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
 {
 	unsigned long addr = trace->func;
 	int ret = 0;
@@ -920,12 +925,11 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	}
 
 	if (ftrace_lookup_ip(hash, addr)) {
-
 		/*
 		 * This needs to be cleared on the return functions
 		 * when the depth is zero.
 		 */
-		trace_recursion_set(TRACE_GRAPH_BIT);
+		*task_var |= TRACE_GRAPH_FL;
 		trace_recursion_set_depth(trace->depth);
 
 		/*
@@ -945,11 +949,14 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	return ret;
 }
 
-static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+static inline void
+ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
 {
-	if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+	unsigned long *task_var = fgraph_get_task_var(gops);
+
+	if ((*task_var & TRACE_GRAPH_FL) &&
 	    trace->depth == trace_recursion_depth())
-		trace_recursion_clear(TRACE_GRAPH_BIT);
+		*task_var &= ~TRACE_GRAPH_FL;
 }
 
 static inline int ftrace_graph_notrace_addr(unsigned long addr)
@@ -976,7 +983,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 }
 
 #else
-static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
+static inline int ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
 {
 	return 1;
 }
@@ -985,17 +992,20 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
 	return 0;
 }
-static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
 {
 }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 extern unsigned int fgraph_max_depth;
-static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
+static inline bool
+ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace)
 {
+	unsigned long *task_var = fgraph_get_task_var(gops);
+
 	/* trace it when it is-nested-in or is a function enabled. */
-	return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
-		 ftrace_graph_addr(trace)) ||
+	return !((*task_var & TRACE_GRAPH_FL) ||
+		 ftrace_graph_addr(task_var, trace)) ||
 		(trace->depth < 0) ||
 		(fgraph_max_depth && trace->depth >= fgraph_max_depth);
 }
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 7f30652f0e97..66cce73e94f8 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -160,7 +160,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	if (!ftrace_trace_task(tr))
 		return 0;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 
 	if (ftrace_graph_ignore_irqs())
@@ -247,7 +247,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	long disabled;
 	int cpu;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
 		trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
@@ -269,7 +269,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
 				      struct fgraph_ops *gops)
 {
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
 		trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 5478f4c4f708..fce064e20570 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -184,7 +184,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
 	unsigned int trace_ctx;
 	int ret;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 	/*
 	 * Do not trace a function if it's filtered by set_graph_notrace.
@@ -214,7 +214,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
 	unsigned long flags;
 	unsigned int trace_ctx;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (!func_prolog_dec(tr, &data, &flags))
 		return;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 49bcc812652c..130ca7e7787e 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -120,7 +120,7 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
 	unsigned int trace_ctx;
 	int ret = 0;
 
-	if (ftrace_graph_ignore_func(trace))
+	if (ftrace_graph_ignore_func(gops, trace))
 		return 0;
 	/*
 	 * Do not trace a function if it's filtered by set_graph_notrace.
@@ -149,7 +149,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
 	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 
-	ftrace_graph_addr_finish(trace);
+	ftrace_graph_addr_finish(gops, trace);
 
 	if (!func_prolog_preempt_disable(tr, &data, &trace_ctx))
 		return;
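
For readers skimming the series, here is a minimal user-space sketch of the
logic being moved. It is not kernel code: struct mock_task, struct
mock_fgraph_ops, in_graph_hash() and the mocked fgraph_get_task_var() are
stand-ins invented for illustration, and the depth bookkeeping of the real
helpers is omitted. Only the TRACE_GRAPH_FL bit handling mirrors the diff
above.

/*
 * Minimal sketch (not kernel code) of the per-fgraph_ops flag handling
 * this patch introduces: each fgraph_ops gets a per-task storage word,
 * and TRACE_GRAPH_FL in that word replaces the old per-task
 * trace_recursion TRACE_GRAPH_BIT.
 */
#include <stdbool.h>
#include <stdio.h>

enum { TRACE_GRAPH_FL = 1 };

/* Stand-in for the per-task, per-fgraph_ops word on the shadow stack. */
struct mock_task {
	unsigned long graph_var;
};

struct mock_fgraph_ops {
	struct mock_task *task;		/* task currently being traced */
};

/* Mocked fgraph_get_task_var(): the word this gops owns for this task. */
static unsigned long *fgraph_get_task_var(struct mock_fgraph_ops *gops)
{
	return &gops->task->graph_var;
}

/* Pretend set_graph_function hash lookup: trace only one "wanted" address. */
static bool in_graph_hash(unsigned long addr)
{
	return addr == 0x1000;		/* hypothetical filtered address */
}

/* Simplified ftrace_graph_addr(): set the flag when the function matches. */
static int graph_addr(unsigned long *task_var, unsigned long addr)
{
	if (!in_graph_hash(addr))
		return 0;
	*task_var |= TRACE_GRAPH_FL;	/* was trace_recursion_set(TRACE_GRAPH_BIT) */
	return 1;
}

/* Simplified ftrace_graph_ignore_func(): same truth table, minus depth checks. */
static bool graph_ignore_func(struct mock_fgraph_ops *gops, unsigned long addr)
{
	unsigned long *task_var = fgraph_get_task_var(gops);

	/* trace it when it is nested in, or is, an enabled function */
	return !((*task_var & TRACE_GRAPH_FL) || graph_addr(task_var, addr));
}

/* Simplified ftrace_graph_addr_finish(): clear the flag on function return. */
static void graph_addr_finish(struct mock_fgraph_ops *gops)
{
	unsigned long *task_var = fgraph_get_task_var(gops);

	*task_var &= ~TRACE_GRAPH_FL;	/* was trace_recursion_clear(TRACE_GRAPH_BIT) */
}

int main(void)
{
	struct mock_task task = { .graph_var = 0 };
	struct mock_fgraph_ops gops = { .task = &task };

	printf("0x1000 ignored? %d\n", graph_ignore_func(&gops, 0x1000)); /* 0: matches, flag set */
	printf("0x2000 ignored? %d\n", graph_ignore_func(&gops, 0x2000)); /* 0: nested under 0x1000 */
	graph_addr_finish(&gops);
	printf("0x2000 ignored? %d\n", graph_ignore_func(&gops, 0x2000)); /* 1: flag cleared */
	return 0;
}

The point of the change is visible in the mock: because the flag lives in
storage handed out per fgraph_ops (fgraph_get_task_var() in the real code),
each registered graph tracer keeps its own set_graph_function state instead
of all of them sharing the single TRACE_GRAPH_BIT in task->trace_recursion.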