From patchwork Fri Jan 12 10:11:02 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187663
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v6 01/36] ftrace: Fix DIRECT_CALLS to use SAVE_REGS by default
Date: Fri, 12 Jan 2024 19:11:02 +0900
Message-Id: <170505426238.459169.6078729647487152980.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>
References: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Masami Hiramatsu (Google)

Commit 60c8971899f3 ("ftrace: Make DIRECT_CALLS work WITH_ARGS and
!WITH_REGS") changed DIRECT_CALLS to use SAVE_ARGS when there are multiple
ftrace_ops on the same function. However, x86 can only jump to a
direct_call from ftrace_regs_caller, so when the function tracer is set on
the same target function on x86, ftrace-direct stops working, as shown
below (this still works on arm64).

First, insmod ftrace-direct.ko to put a direct_call on 'wake_up_process()'.

 # insmod kernel/samples/ftrace/ftrace-direct.ko
 # less trace
 ...
   <idle>-0  [006] ..s1.  564.686958: my_direct_func: waking up rcu_preempt-17
   <idle>-0  [007] ..s1.  564.687836: my_direct_func: waking up kcompactd0-63
   <idle>-0  [006] ..s1.  564.690926: my_direct_func: waking up rcu_preempt-17
   <idle>-0  [006] ..s1.  564.696872: my_direct_func: waking up rcu_preempt-17
   <idle>-0  [007] ..s1.  565.191982: my_direct_func: waking up kcompactd0-63

Next, set a function filter on 'wake_up_process' too, and enable the
function tracer.

 # cd /sys/kernel/tracing/
 # echo wake_up_process > set_ftrace_filter
 # echo function > current_tracer
 # less trace
 ...
   <idle>-0  [006] ..s3.  686.180972: wake_up_process <-call_timer_fn
   <idle>-0  [006] ..s3.  686.186919: wake_up_process <-call_timer_fn
   <idle>-0  [002] ..s3.  686.264049: wake_up_process <-call_timer_fn
   <idle>-0  [002] d.h6.  686.515216: wake_up_process <-kick_pool
   <idle>-0  [002] d.h6.  686.691386: wake_up_process <-kick_pool

Now only the function tracer output is shown on x86; my_direct_func no
longer appears. But if you enable a 'kprobe on ftrace' event (which uses
the SAVE_REGS flag) on the same function, the direct call output is shown
again.

 # echo 'p wake_up_process' >> dynamic_events
 # echo 1 > events/kprobes/p_wake_up_process_0/enable
 # echo > trace
 # less trace
 ...
   <idle>-0  [006] ..s2.  2710.345919: p_wake_up_process_0: (wake_up_process+0x4/0x20)
   <idle>-0  [006] ..s3.  2710.345923: wake_up_process <-call_timer_fn
   <idle>-0  [006] ..s1.  2710.345928: my_direct_func: waking up rcu_preempt-17
   <idle>-0  [006] ..s2.  2710.349931: p_wake_up_process_0: (wake_up_process+0x4/0x20)
   <idle>-0  [006] ..s3.  2710.349934: wake_up_process <-call_timer_fn
   <idle>-0  [006] ..s1.  2710.349937: my_direct_func: waking up rcu_preempt-17

To fix this issue, use the SAVE_REGS flag by default in the flags
(MULTI_FLAGS) applied when multiple ftrace_ops share a direct_call target.

Link: https://lore.kernel.org/all/170484558617.178953.1590516949390270842.stgit@devnote2/

Fixes: 60c8971899f3 ("ftrace: Make DIRECT_CALLS work WITH_ARGS and !WITH_REGS")
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu (Google)
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland [arm64]
Acked-by: Jiri Olsa
---
 kernel/trace/ftrace.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b01ae7d36021..c060d5b47910 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5325,7 +5325,17 @@ static LIST_HEAD(ftrace_direct_funcs);
 
 static int register_ftrace_function_nolock(struct ftrace_ops *ops);
 
+/*
+ * If there are multiple ftrace_ops, use SAVE_REGS by default, so that direct
+ * call will be jumped from ftrace_regs_caller. Only if the architecture does
+ * not support ftrace_regs_caller but direct_call, use SAVE_ARGS so that it
+ * jumps from ftrace_caller for multiple ftrace_ops.
+ */
+#ifndef HAVE_DYNAMIC_FTRACE_WITH_REGS
 #define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_ARGS)
+#else
+#define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS)
+#endif
 
 static int check_direct_multi(struct ftrace_ops *ops)
 {
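For reference, a minimal, editor-added sketch (not part of this patch, untested) of the
coexistence scenario described above: a second ftrace_ops attached to the same function
that already has a direct call, using FTRACE_OPS_FL_SAVE_REGS the way the 'kprobe on
ftrace' case effectively does. The module and callback names are illustrative
assumptions only.

#include <linux/module.h>
#include <linux/ftrace.h>

/* Callback invoked for every filtered function (illustration only). */
static void my_trace_cb(unsigned long ip, unsigned long parent_ip,
			struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	pr_info("traced %pS called from %pS\n", (void *)ip, (void *)parent_ip);
}

static unsigned char target[] = "wake_up_process";

static struct ftrace_ops my_ops = {
	.func	= my_trace_cb,
	/* SAVE_REGS selects ftrace_regs_caller, the only trampoline x86 can
	 * chain a direct call from, so the direct trampoline keeps working. */
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

static int __init my_init(void)
{
	int ret;

	/* Trace the same function the sample direct call is attached to. */
	ret = ftrace_set_filter(&my_ops, target, sizeof(target) - 1, 1);
	if (ret)
		return ret;

	return register_ftrace_function(&my_ops);
}

static void __exit my_exit(void)
{
	unregister_ftrace_function(&my_ops);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");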
From patchwork Fri Jan 12 10:11:13 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187664
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v6 02/36] tracing: Add a comment about ftrace_regs definition
Date: Fri, 12 Jan 2024 19:11:13 +0900
Message-Id: <170505427384.459169.12870691493043411135.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Masami Hiramatsu (Google)

To clarify what is expected to be in ftrace_regs, add a comment to the
architecture-independent definition of ftrace_regs.

Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Mark Rutland
---
Changes in v3:
 - Add instruction pointer.
Changes in v2:
 - Newly added.
---
 include/linux/ftrace.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index e8921871ef9a..8b48fc621ea0 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -118,6 +118,32 @@ extern int ftrace_enabled;
 
 #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
 
+/**
+ * ftrace_regs - ftrace partial/optimal register set
+ *
+ * ftrace_regs represents a group of registers which is used at the
+ * function entry and exit. There are three types of registers.
+ *
+ * - Registers for passing the parameters to callee, including the stack
+ *   pointer. (e.g. rcx, rdx, rdi, rsi, r8, r9 and rsp on x86_64)
+ * - Registers for passing the return values to caller.
+ *   (e.g. rax and rdx on x86_64)
+ * - Registers for hooking the function call and return including the
+ *   frame pointer (the frame pointer is architecture/config dependent)
+ *   (e.g. rip, rbp and rsp for x86_64)
+ *
+ * Also, architecture dependent fields can be used for internal process.
+ * (e.g. orig_ax on x86_64)
+ *
+ * On the function entry, those registers will be restored except for
+ * the stack pointer, so that user can change the function parameters
+ * and instruction pointer (e.g. live patching.)
+ * On the function exit, only registers which is used for return values
+ * are restored.
+ *
+ * NOTE: user *must not* access regs directly, only do it via APIs, because
+ * the member can be changed according to the architecture.
+ */
 struct ftrace_regs {
 	struct pt_regs regs;
 };
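As an editor-added illustration of the NOTE in the comment above (not from the series):
a callback helper that inspects ftrace_regs only through the accessor helpers declared
in <linux/ftrace.h>, so it does not depend on how a given architecture lays out the
structure internally. The helper name is hypothetical.

#include <linux/ftrace.h>
#include <linux/printk.h>

/* Read the traced function's IP, first argument and stack pointer through
 * the accessor macros instead of dereferencing fregs->regs directly. */
static void inspect_entry(struct ftrace_regs *fregs)
{
	unsigned long ip   = ftrace_regs_get_instruction_pointer(fregs);
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);
	unsigned long sp   = ftrace_regs_get_stack_pointer(fregs);

	pr_info("entry: ip=%pS arg0=%lx sp=%lx\n", (void *)ip, arg0, sp);
}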
From patchwork Fri Jan 12 10:11:26 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187665
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v6 03/36] tracing: Rename ftrace_regs_return_value to ftrace_regs_get_return_value
Date: Fri, 12 Jan 2024 19:11:26 +0900
Message-Id: <170505428634.459169.3461513510942403752.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Masami Hiramatsu (Google)

Rename ftrace_regs_return_value to ftrace_regs_get_return_value so that it
matches the other ftrace_regs_get/set_* APIs.

Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Mark Rutland
---
Changes in v6:
 - Moved to top of the series.
Changes in v3:
 - Newly added.
---
 arch/loongarch/include/asm/ftrace.h | 2 +-
 arch/powerpc/include/asm/ftrace.h   | 2 +-
 arch/s390/include/asm/ftrace.h      | 2 +-
 arch/x86/include/asm/ftrace.h       | 2 +-
 include/linux/ftrace.h              | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/loongarch/include/asm/ftrace.h b/arch/loongarch/include/asm/ftrace.h
index a11996eb5892..a9c3d0f2f941 100644
--- a/arch/loongarch/include/asm/ftrace.h
+++ b/arch/loongarch/include/asm/ftrace.h
@@ -70,7 +70,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip)
 	regs_get_kernel_argument(&(fregs)->regs, n)
 #define ftrace_regs_get_stack_pointer(fregs) \
 	kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
+#define ftrace_regs_get_return_value(fregs) \
 	regs_return_value(&(fregs)->regs)
 #define ftrace_regs_set_return_value(fregs, ret) \
 	regs_set_return_value(&(fregs)->regs, ret)
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 9e5a39b6a311..7e138e0e3baf 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -69,7 +69,7 @@ ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
 	regs_get_kernel_argument(&(fregs)->regs, n)
 #define ftrace_regs_get_stack_pointer(fregs) \
 	kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
+#define ftrace_regs_get_return_value(fregs) \
 	regs_return_value(&(fregs)->regs)
 #define ftrace_regs_set_return_value(fregs, ret) \
 	regs_set_return_value(&(fregs)->regs, ret)
diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
index 5a82b08f03cd..01e775c98425 100644
--- a/arch/s390/include/asm/ftrace.h
+++ b/arch/s390/include/asm/ftrace.h
@@ -88,7 +88,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
 	regs_get_kernel_argument(&(fregs)->regs, n)
 #define ftrace_regs_get_stack_pointer(fregs) \
 	kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
+#define ftrace_regs_get_return_value(fregs) \
 	regs_return_value(&(fregs)->regs)
 #define ftrace_regs_set_return_value(fregs, ret) \
 	regs_set_return_value(&(fregs)->regs, ret)
diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 897cf02c20b1..cf88cc8cc74d 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -58,7 +58,7 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
 	regs_get_kernel_argument(&(fregs)->regs, n)
 #define ftrace_regs_get_stack_pointer(fregs) \
 	kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
+#define ftrace_regs_get_return_value(fregs) \
 	regs_return_value(&(fregs)->regs)
 #define ftrace_regs_set_return_value(fregs, ret) \
 	regs_set_return_value(&(fregs)->regs, ret)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 8b48fc621ea0..39ac1f3e8041 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -184,7 +184,7 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs)
 	regs_get_kernel_argument(ftrace_get_regs(fregs), n)
 #define ftrace_regs_get_stack_pointer(fregs) \
 	kernel_stack_pointer(ftrace_get_regs(fregs))
-#define ftrace_regs_return_value(fregs) \
+#define ftrace_regs_get_return_value(fregs) \
 	regs_return_value(ftrace_get_regs(fregs))
 #define ftrace_regs_set_return_value(fregs, ret) \
 	regs_set_return_value(ftrace_get_regs(fregs), ret)
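A small editor-added usage sketch of the renamed accessor (the helper name is
hypothetical; the snippet only shows the exit-side call, not a complete handler):

#include <linux/ftrace.h>
#include <linux/printk.h>

/* On the exit side of a traced function, fetch its return value through the
 * renamed ftrace_regs_get_return_value() accessor. */
static void report_retval(struct ftrace_regs *fregs)
{
	unsigned long retval = ftrace_regs_get_return_value(fregs);

	pr_info("traced function returned %lx\n", retval);
}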
From patchwork Fri Jan 12 10:11:38 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187666
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v6 04/36] x86: tracing: Add ftrace_regs definition in the header
Date: Fri, 12 Jan 2024 19:11:38 +0900
Message-Id: <170505429822.459169.14866081576830532227.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Masami Hiramatsu (Google)

Add an ftrace_regs definition for x86_64 in the ftrace header to clarify
which registers are accessible from ftrace_regs.

Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
 - Add rip to be saved.
Changes in v2:
 - Newly added.
---
 arch/x86/include/asm/ftrace.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index cf88cc8cc74d..c88bf47f46da 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -36,6 +36,12 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
 
 struct ftrace_regs {
+	/*
+	 * On the x86_64, the ftrace_regs saves;
+	 * rax, rcx, rdx, rdi, rsi, r8, r9, rbp, rip and rsp.
+	 * Also orig_ax is used for passing direct trampoline address.
+	 * x86_32 doesn't support ftrace_regs.
+	 */
 	struct pt_regs regs;
 };
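An editor-added aside, not part of the patch: because x86_64 ftrace_regs saves only the
registers listed in the new comment, a callback that needs a complete pt_regs has to go
through ftrace_get_regs(), which returns NULL unless the full set was saved (for example
when the ops sets FTRACE_OPS_FL_SAVE_REGS). A hedged sketch with a hypothetical helper
name:

#include <linux/ftrace.h>
#include <linux/ptrace.h>
#include <linux/printk.h>

/* Only use the full pt_regs view when ftrace actually saved it. */
static void show_full_regs_if_available(struct ftrace_regs *fregs)
{
	struct pt_regs *regs = ftrace_get_regs(fregs);

	if (regs)
		pr_info("full pt_regs available, ip=%lx\n",
			instruction_pointer(regs));
	else
		pr_info("only the partial ftrace_regs set was saved\n");
}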
From patchwork Fri Jan 12 10:11:50 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187667
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v6 05/36] function_graph: Convert ret_stack to a series of longs
Date: Fri, 12 Jan 2024 19:11:50 +0900
Message-Id: <170505430992.459169.14425481694961061545.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Steven Rostedt (VMware)

In order to make it possible to have multiple callbacks registered with the
function_graph tracer, the ret_stack needs to be converted from an array of
ftrace_ret_stack structures to an array of longs.
This will allow to store the list of callbacks on the stack for the return side of the functions. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/sched.h | 2 - kernel/trace/fgraph.c | 124 ++++++++++++++++++++++++++++--------------------- 2 files changed, 71 insertions(+), 55 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 292c31697248..4dab30f00211 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1390,7 +1390,7 @@ struct task_struct { int curr_ret_depth; /* Stack of return addresses for return function tracing: */ - struct ftrace_ret_stack *ret_stack; + unsigned long *ret_stack; /* Timestamp for last schedule: */ unsigned long long ftrace_timestamp; diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index c83c005e654e..30edeb6d4aa9 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -25,6 +25,18 @@ #define ASSIGN_OPS_HASH(opsname, val) #endif +#define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack) +#define FGRAPH_RET_INDEX (ALIGN(FGRAPH_RET_SIZE, sizeof(long)) / sizeof(long)) +#define SHADOW_STACK_SIZE (PAGE_SIZE) +#define SHADOW_STACK_INDEX \ + (ALIGN(SHADOW_STACK_SIZE, sizeof(long)) / sizeof(long)) +/* Leave on a buffer at the end */ +#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX) + +#define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) +#define RET_STACK_INC(c) ({ c += FGRAPH_RET_INDEX; }) +#define RET_STACK_DEC(c) ({ c -= FGRAPH_RET_INDEX; }) + DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; @@ -69,6 +81,7 @@ static int ftrace_push_return_trace(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp) { + struct ftrace_ret_stack *ret_stack; unsigned long long calltime; int index; @@ -85,23 +98,25 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, smp_rmb(); /* The return trace stack is full */ - if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1) { + if (current->curr_ret_stack >= SHADOW_STACK_MAX_INDEX) { atomic_inc(¤t->trace_overrun); return -EBUSY; } calltime = trace_clock_local(); - index = ++current->curr_ret_stack; + index = current->curr_ret_stack; + RET_STACK_INC(current->curr_ret_stack); + ret_stack = RET_STACK(current, index); barrier(); - current->ret_stack[index].ret = ret; - current->ret_stack[index].func = func; - current->ret_stack[index].calltime = calltime; + ret_stack->ret = ret; + ret_stack->func = func; + ret_stack->calltime = calltime; #ifdef HAVE_FUNCTION_GRAPH_FP_TEST - current->ret_stack[index].fp = frame_pointer; + ret_stack->fp = frame_pointer; #endif #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR - current->ret_stack[index].retp = retp; + ret_stack->retp = retp; #endif return 0; } @@ -148,7 +163,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, return 0; out_ret: - current->curr_ret_stack--; + RET_STACK_DEC(current->curr_ret_stack); out: current->curr_ret_depth--; return -EBUSY; @@ -159,11 +174,13 @@ static void ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, unsigned long frame_pointer) { + struct ftrace_ret_stack *ret_stack; int index; index = current->curr_ret_stack; + RET_STACK_DEC(index); - if (unlikely(index < 0 || index >= FTRACE_RETFUNC_DEPTH)) { + if (unlikely(index < 0 || index > SHADOW_STACK_MAX_INDEX)) { ftrace_graph_stop(); WARN_ON(1); /* Might as well panic, otherwise we have no where to go */ @@ -171,6 +188,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret 
*trace, unsigned long *ret, return; } + ret_stack = RET_STACK(current, index); #ifdef HAVE_FUNCTION_GRAPH_FP_TEST /* * The arch may choose to record the frame pointer used @@ -186,22 +204,22 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, * Note, -mfentry does not use frame pointers, and this test * is not needed if CC_USING_FENTRY is set. */ - if (unlikely(current->ret_stack[index].fp != frame_pointer)) { + if (unlikely(ret_stack->fp != frame_pointer)) { ftrace_graph_stop(); WARN(1, "Bad frame pointer: expected %lx, received %lx\n" " from func %ps return to %lx\n", current->ret_stack[index].fp, frame_pointer, - (void *)current->ret_stack[index].func, - current->ret_stack[index].ret); + (void *)ret_stack->func, + ret_stack->ret); *ret = (unsigned long)panic; return; } #endif - *ret = current->ret_stack[index].ret; - trace->func = current->ret_stack[index].func; - trace->calltime = current->ret_stack[index].calltime; + *ret = ret_stack->ret; + trace->func = ret_stack->func; + trace->calltime = ret_stack->calltime; trace->overrun = atomic_read(¤t->trace_overrun); trace->depth = current->curr_ret_depth--; /* @@ -262,7 +280,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs * curr_ret_stack is after that. */ barrier(); - current->curr_ret_stack--; + RET_STACK_DEC(current->curr_ret_stack); if (unlikely(!ret)) { ftrace_graph_stop(); @@ -305,12 +323,13 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer) struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx) { - idx = task->curr_ret_stack - idx; + int index = task->curr_ret_stack; - if (idx >= 0 && idx <= task->curr_ret_stack) - return &task->ret_stack[idx]; + index -= FGRAPH_RET_INDEX * (idx + 1); + if (index < 0) + return NULL; - return NULL; + return RET_STACK(task, index); } /** @@ -332,18 +351,20 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx) unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp) { + struct ftrace_ret_stack *ret_stack; int index = task->curr_ret_stack; int i; if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret; - if (index < 0) - return ret; + RET_STACK_DEC(index); - for (i = 0; i <= index; i++) - if (task->ret_stack[i].retp == retp) - return task->ret_stack[i].ret; + for (i = index; i >= 0; RET_STACK_DEC(i)) { + ret_stack = RET_STACK(task, i); + if (ret_stack->retp == retp) + return ret_stack->ret; + } return ret; } @@ -357,14 +378,15 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, return ret; task_idx = task->curr_ret_stack; + RET_STACK_DEC(task_idx); if (!task->ret_stack || task_idx < *idx) return ret; task_idx -= *idx; - (*idx)++; + RET_STACK_INC(*idx); - return task->ret_stack[task_idx].ret; + return RET_STACK(task, task_idx); } #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */ @@ -402,7 +424,7 @@ trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub; static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub; /* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. 
*/ -static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list) +static int alloc_retstack_tasklist(unsigned long **ret_stack_list) { int i; int ret = 0; @@ -410,10 +432,7 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list) struct task_struct *g, *t; for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) { - ret_stack_list[i] = - kmalloc_array(FTRACE_RETFUNC_DEPTH, - sizeof(struct ftrace_ret_stack), - GFP_KERNEL); + ret_stack_list[i] = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL); if (!ret_stack_list[i]) { start = 0; end = i; @@ -431,9 +450,9 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list) if (t->ret_stack == NULL) { atomic_set(&t->trace_overrun, 0); - t->curr_ret_stack = -1; + t->curr_ret_stack = 0; t->curr_ret_depth = -1; - /* Make sure the tasks see the -1 first: */ + /* Make sure the tasks see the 0 first: */ smp_wmb(); t->ret_stack = ret_stack_list[start++]; } @@ -453,6 +472,7 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, struct task_struct *next, unsigned int prev_state) { + struct ftrace_ret_stack *ret_stack; unsigned long long timestamp; int index; @@ -477,8 +497,11 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, */ timestamp -= next->ftrace_timestamp; - for (index = next->curr_ret_stack; index >= 0; index--) - next->ret_stack[index].calltime += timestamp; + for (index = next->curr_ret_stack - FGRAPH_RET_INDEX; index >= 0; ) { + ret_stack = RET_STACK(next, index); + ret_stack->calltime += timestamp; + index -= FGRAPH_RET_INDEX; + } } static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace) @@ -521,10 +544,10 @@ void update_function_graph_func(void) ftrace_graph_entry = __ftrace_graph_entry; } -static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack); +static DEFINE_PER_CPU(unsigned long *, idle_ret_stack); static void -graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack) +graph_init_task(struct task_struct *t, unsigned long *ret_stack) { atomic_set(&t->trace_overrun, 0); t->ftrace_timestamp = 0; @@ -539,7 +562,7 @@ graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack) */ void ftrace_graph_init_idle_task(struct task_struct *t, int cpu) { - t->curr_ret_stack = -1; + t->curr_ret_stack = 0; t->curr_ret_depth = -1; /* * The idle task has no parent, it either has its own @@ -549,14 +572,11 @@ void ftrace_graph_init_idle_task(struct task_struct *t, int cpu) WARN_ON(t->ret_stack != per_cpu(idle_ret_stack, cpu)); if (ftrace_graph_active) { - struct ftrace_ret_stack *ret_stack; + unsigned long *ret_stack; ret_stack = per_cpu(idle_ret_stack, cpu); if (!ret_stack) { - ret_stack = - kmalloc_array(FTRACE_RETFUNC_DEPTH, - sizeof(struct ftrace_ret_stack), - GFP_KERNEL); + ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL); if (!ret_stack) return; per_cpu(idle_ret_stack, cpu) = ret_stack; @@ -570,15 +590,13 @@ void ftrace_graph_init_task(struct task_struct *t) { /* Make sure we do not use the parent ret_stack */ t->ret_stack = NULL; - t->curr_ret_stack = -1; + t->curr_ret_stack = 0; t->curr_ret_depth = -1; if (ftrace_graph_active) { - struct ftrace_ret_stack *ret_stack; + unsigned long *ret_stack; - ret_stack = kmalloc_array(FTRACE_RETFUNC_DEPTH, - sizeof(struct ftrace_ret_stack), - GFP_KERNEL); + ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL); if (!ret_stack) return; graph_init_task(t, ret_stack); @@ -587,7 +605,7 @@ void ftrace_graph_init_task(struct task_struct *t) void ftrace_graph_exit_task(struct task_struct *t) { - struct 
ftrace_ret_stack *ret_stack = t->ret_stack;
+	unsigned long *ret_stack = t->ret_stack;
 
 	t->ret_stack = NULL;
 	/* NULL must become visible to IRQs before we free it: */
@@ -599,12 +617,10 @@ void ftrace_graph_exit_task(struct task_struct *t)
 
 /* Allocate a return stack for each task */
 static int start_graph_tracing(void)
 {
-	struct ftrace_ret_stack **ret_stack_list;
+	unsigned long **ret_stack_list;
 	int ret, cpu;
 
-	ret_stack_list = kmalloc_array(FTRACE_RETSTACK_ALLOC_SIZE,
-				       sizeof(struct ftrace_ret_stack *),
-				       GFP_KERNEL);
+	ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
 	if (!ret_stack_list)
 		return -ENOMEM;
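To make the new layout easier to picture, here is a small editor-added, standalone C
sketch (not kernel code) of the word-indexed scheme the patch introduces: fixed-size
entries are pushed onto an array of longs and addressed by an index counted in longs,
mirroring the RET_STACK()/RET_STACK_INC() macros in the diff. All names here are
invented for the illustration.

#include <stdio.h>
#include <string.h>

/* Toy stand-in for struct ftrace_ret_stack. */
struct ret_entry {
	unsigned long ret;
	unsigned long func;
	unsigned long long calltime;
};

#define ENTRY_WORDS	(sizeof(struct ret_entry) / sizeof(long))
#define STACK_WORDS	256

static long shadow_stack[STACK_WORDS];
static int curr_index;	/* counts longs, not entries, like curr_ret_stack */

/* Push one entry by copying it onto the array of longs. */
static int push_entry(const struct ret_entry *e)
{
	if (curr_index + ENTRY_WORDS > STACK_WORDS)
		return -1;	/* shadow stack full */
	memcpy(&shadow_stack[curr_index], e, sizeof(*e));
	curr_index += ENTRY_WORDS;	/* like RET_STACK_INC() */
	return 0;
}

/* Look at the top entry, like RET_STACK(current, index). */
static struct ret_entry *top_entry(void)
{
	if (curr_index < (int)ENTRY_WORDS)
		return NULL;
	return (struct ret_entry *)&shadow_stack[curr_index - ENTRY_WORDS];
}

int main(void)
{
	struct ret_entry e = { .ret = 0x1234, .func = 0x5678, .calltime = 42 };

	push_entry(&e);
	printf("top: ret=%#lx func=%#lx calltime=%llu (index=%d longs)\n",
	       top_entry()->ret, top_entry()->func, top_entry()->calltime,
	       curr_index);
	return 0;
}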
From patchwork Fri Jan 12 10:12:01 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 187668
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v6 06/36] fgraph: Use BUILD_BUG_ON() to make sure we have structures divisible by long
Date: Fri, 12 Jan 2024 19:12:01 +0900
Message-Id: <170505432176.459169.10224406939710504027.stgit@devnote2>
In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2>

From: Steven Rostedt (VMware)

Instead of using "ALIGN()", use BUILD_BUG_ON() as the structures should
always be divisible by sizeof(long).

Link: http://lkml.kernel.org/r/20190524111144.GI2589@hirez.programming.kicks-ass.net

Suggested-by: Peter Zijlstra
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 kernel/trace/fgraph.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 30edeb6d4aa9..837daf929d2a 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -26,10 +26,9 @@
 #endif
 
 #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack)
-#define FGRAPH_RET_INDEX (ALIGN(FGRAPH_RET_SIZE, sizeof(long)) / sizeof(long))
+#define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long))
 #define SHADOW_STACK_SIZE (PAGE_SIZE)
-#define SHADOW_STACK_INDEX \
-	(ALIGN(SHADOW_STACK_SIZE, sizeof(long)) / sizeof(long))
+#define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long))
 /* Leave on a buffer at the end */
 #define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX)
 
@@ -91,6 +90,8 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
 	if (!current->ret_stack)
 		return -EBUSY;
 
+	BUILD_BUG_ON(SHADOW_STACK_SIZE % sizeof(long));
+
 	/*
 	 * We must make sure the ret_stack is tested before we read
 	 * anything else.
@@ -325,6 +326,8 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx)
 {
 	int index = task->curr_ret_stack;
 
+	BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long));
+
 	index -= FGRAPH_RET_INDEX * (idx + 1);
 	if (index < 0)
 		return NULL;
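For illustration only (editor-added, assuming nothing beyond <linux/build_bug.h>):
BUILD_BUG_ON() turns the "divisible by sizeof(long)" assumption into a compile-time
failure instead of silently rounding with ALIGN(). A hedged sketch of the same kind of
check on a made-up structure:

#include <linux/build_bug.h>

/* Stand-in structure; padding makes its size a multiple of sizeof(long) on
 * common ABIs, so the check below compiles cleanly. */
struct demo_entry {
	unsigned long ret;
	unsigned long func;
	int depth;
};

static inline void demo_entry_check(void)
{
	/* Mirrors the FGRAPH_RET_SIZE check: refuse to build if the entry
	 * cannot be expressed as a whole number of longs. */
	BUILD_BUG_ON(sizeof(struct demo_entry) % sizeof(long));
}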
smtp.mailfrom="linux-kernel+bounces-24553-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. [2604:1380:45d1:ec00::1]) by mx.google.com with ESMTPS id k12-20020a0cabcc000000b0067f8cab3dbasi2576827qvb.158.2024.01.12.02.13.56 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:13:56 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24553-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) client-ip=2604:1380:45d1:ec00::1; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=idNjdh0N; spf=pass (google.com: domain of linux-kernel+bounces-24553-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24553-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 104011C24BA8 for ; Fri, 12 Jan 2024 10:13:56 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4EBB260B84; Fri, 12 Jan 2024 10:12:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="idNjdh0N" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46C075DF30; Fri, 12 Jan 2024 10:12:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5AA45C433F1; Fri, 12 Jan 2024 10:12:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054340; bh=51Cq89Jg5fqfgwiJmEALH7euiYhrlJJ1/IPjogh9mj0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=idNjdh0Nk1hhlqSCKPzTMhVTYkqAQp2orKE+QPWemJZmykq+F+wSOzycHTt5VOH4+ 8pRX0ahUGSfAFd2gl2vE7CMh71UU8xnogO2bs9cBjbU0txUX1bodNiaPmO4zEycQNg Y8P/E2RIJzer3v/s3mGY9OsXG1cvOmff7ohCVEr/jTeI83MVosLpcb8gq3Pi/cM6/X +NcRvmIwuXy+mWNe3mg9sOL2z3Oj+LqswuhzaUHGGfWHonSQcZDhfLjwK02MzPh7LF bTJ4JnwhzEQbp9DRYkaVeSdwERVZzCDujnU27BpuNW9yo/caxEElNHqdwFJuZbTj+G NFT86oDtRjHQw== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 07/36] function_graph: Add an array structure that will allow multiple callbacks Date: Fri, 12 Jan 2024 19:12:13 +0900 Message-Id: <170505433303.459169.17712536568437440421.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879160636984001 X-GMAIL-MSGID: 
1787879160636984001 From: Steven Rostedt (VMware) Add an array structure that will eventually allow the function graph tracer to have up to 16 simultaneous callbacks attached. It's an array of 16 fgraph_ops pointers, that is assigned when one is registered. On entry of a function the entry of the first item in the array is called, and if it returns zero, then the callback returns non zero if it wants the return callback to be called on exit of the function. The array will simplify the process of having more than one callback attached to the same function, as its index into the array can be stored on the shadow stack. We need to only save the index, because this will allow the fgraph_ops to be freed before the function returns (which may happen if the function call schedule for a long time). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Remove unneeded brace. --- kernel/trace/fgraph.c | 114 +++++++++++++++++++++++++++++++++++-------------- 1 file changed, 81 insertions(+), 33 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 837daf929d2a..86df3ca6964f 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -39,6 +39,11 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; +static int fgraph_array_cnt; +#define FGRAPH_ARRAY_SIZE 16 + +static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; + /* Both enabled by default (can be cleared by function_graph tracer flags */ static bool fgraph_sleep_time = true; @@ -62,6 +67,20 @@ int __weak ftrace_disable_ftrace_graph_caller(void) } #endif +int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) +{ + return 0; +} + +static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace) +{ +} + +static struct fgraph_ops fgraph_stub = { + .entryfunc = ftrace_graph_entry_stub, + .retfunc = ftrace_graph_ret_stub, +}; + /** * ftrace_graph_stop - set to permanently disable function graph tracing * @@ -159,7 +178,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, goto out; /* Only trace if the calling function expects to */ - if (!ftrace_graph_entry(&trace)) + if (!fgraph_array[0]->entryfunc(&trace)) goto out_ret; return 0; @@ -274,7 +293,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif trace.rettime = trace_clock_local(); - ftrace_graph_return(&trace); + fgraph_array[0]->retfunc(&trace); /* * The ftrace_graph_return() may still access the current * ret_stack structure, we need to make sure the update of @@ -410,11 +429,6 @@ void ftrace_graph_sleep_time_control(bool enable) fgraph_sleep_time = enable; } -int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) -{ - return 0; -} - /* * Simply points to ftrace_stub, but with the proper protocol. 
* Defined by the linker script in linux/vmlinux.lds.h @@ -652,37 +666,54 @@ static int start_graph_tracing(void) int register_ftrace_graph(struct fgraph_ops *gops) { int ret = 0; + int i; mutex_lock(&ftrace_lock); - /* we currently allow only one tracer registered at a time */ - if (ftrace_graph_active) { + if (!fgraph_array[0]) { + /* The array must always have real data on it */ + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) + fgraph_array[i] = &fgraph_stub; + } + + /* Look for an available spot */ + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { + if (fgraph_array[i] == &fgraph_stub) + break; + } + if (i >= FGRAPH_ARRAY_SIZE) { ret = -EBUSY; goto out; } - register_pm_notifier(&ftrace_suspend_notifier); + fgraph_array[i] = gops; + if (i + 1 > fgraph_array_cnt) + fgraph_array_cnt = i + 1; ftrace_graph_active++; - ret = start_graph_tracing(); - if (ret) { - ftrace_graph_active--; - goto out; - } - ftrace_graph_return = gops->retfunc; + if (ftrace_graph_active == 1) { + register_pm_notifier(&ftrace_suspend_notifier); + ret = start_graph_tracing(); + if (ret) { + ftrace_graph_active--; + goto out; + } + + ftrace_graph_return = gops->retfunc; - /* - * Update the indirect function to the entryfunc, and the - * function that gets called to the entry_test first. Then - * call the update fgraph entry function to determine if - * the entryfunc should be called directly or not. - */ - __ftrace_graph_entry = gops->entryfunc; - ftrace_graph_entry = ftrace_graph_entry_test; - update_function_graph_func(); + /* + * Update the indirect function to the entryfunc, and the + * function that gets called to the entry_test first. Then + * call the update fgraph entry function to determine if + * the entryfunc should be called directly or not. + */ + __ftrace_graph_entry = gops->entryfunc; + ftrace_graph_entry = ftrace_graph_entry_test; + update_function_graph_func(); - ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET); + ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET); + } out: mutex_unlock(&ftrace_lock); return ret; @@ -690,19 +721,36 @@ int register_ftrace_graph(struct fgraph_ops *gops) void unregister_ftrace_graph(struct fgraph_ops *gops) { + int i; + mutex_lock(&ftrace_lock); if (unlikely(!ftrace_graph_active)) goto out; - ftrace_graph_active--; - ftrace_graph_return = ftrace_stub_graph; - ftrace_graph_entry = ftrace_graph_entry_stub; - __ftrace_graph_entry = ftrace_graph_entry_stub; - ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); - unregister_pm_notifier(&ftrace_suspend_notifier); - unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); + for (i = 0; i < fgraph_array_cnt; i++) + if (gops == fgraph_array[i]) + break; + if (i >= fgraph_array_cnt) + goto out; + fgraph_array[i] = &fgraph_stub; + if (i + 1 == fgraph_array_cnt) { + for (; i >= 0; i--) + if (fgraph_array[i] != &fgraph_stub) + break; + fgraph_array_cnt = i + 1; + } + + ftrace_graph_active--; + if (!ftrace_graph_active) { + ftrace_graph_return = ftrace_stub_graph; + ftrace_graph_entry = ftrace_graph_entry_stub; + __ftrace_graph_entry = ftrace_graph_entry_stub; + ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); + unregister_pm_notifier(&ftrace_suspend_notifier); + unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); + } out: mutex_unlock(&ftrace_lock); } From patchwork Fri Jan 12 10:12:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187670 Return-Path: Delivered-To: ouuuleilei@gmail.com 
[2604:1380:45d1:ec00::1]) by mx.google.com with ESMTPS id c11-20020ac85a8b000000b0042831bea150si2608078qtc.534.2024.01.12.02.14.13 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:14:13 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24554-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) client-ip=2604:1380:45d1:ec00::1; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=RwZ+QlmH; spf=pass (google.com: domain of linux-kernel+bounces-24554-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24554-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 8636F1C24BDD for ; Fri, 12 Jan 2024 10:14:13 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id EF00960B8D; Fri, 12 Jan 2024 10:12:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="RwZ+QlmH" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EB29560B8B; Fri, 12 Jan 2024 10:12:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0A534C433C7; Fri, 12 Jan 2024 10:12:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054351; bh=APQJdEZ6pt+ekGviwAieGT/uClX2gyf+ahHvsmx9rEQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RwZ+QlmH1KOY0kWR2FVvQE5WDSPzR6iyfcykSM2fvMlzq+HEubCKl7Zcf6LY2hrXD fogKGQdd0ry/5i7b9f8dZlyA1h6bLHMwuyZnOxqu7s70NObFdygr4UFDtzfyViOKRA FxfFjpIsEgk5AmXKT6X6EarhiBK7ZL6hF1x2DJxOBXLb9iVlv8TI2CE4kaxu11nBjM 0KeVnmTQdfIbC4IdLqpLaBnmPiAp0POKf2MgsHzY2ZVzr0nARPbLr69S3UnP/ZZ6y6 PdAZDF/HOBLi2kerWMugeW4Z6yLlSmsZP83pgNwauFAJHVxOOtmhQ/ZHUP1Kc3SVPx aWPqxApaaj87w== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 08/36] function_graph: Allow multiple users to attach to function graph Date: Fri, 12 Jan 2024 19:12:25 +0900 Message-Id: <170505434552.459169.18324871638352953716.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879179097158179 X-GMAIL-MSGID: 1787879179097158179 From: Steven Rostedt (VMware) Allow for multiple users to attach to function graph tracer at the same time. Only 16 simultaneous users can attach to the tracer. 
This is because there is an array that stores the pointers to the attached fgraph_ops. When a function being traced is entered, the entryfunc of each attached fgraph_ops is called, and if it returns non-zero, the index of that fgraph_ops in the array is added to the shadow stack. On exit of the function being traced, the shadow stack will contain the indexes of the fgraph_ops on the array that want their retfunc to be called. Because a function may sleep for a long time (for instance if the task itself sleeps), the return of the function may literally happen days later. If the fgraph_ops is removed, its place on the array is replaced with an fgraph_ops that contains the stub functions, and that will be called when the function finally returns. If another fgraph_ops is added that happens to get the same index into the array, its return function may be called. But that is actually how things currently work with the old function graph tracer. If one tracer is removed and another is added, the new one will get the return calls of the functions traced by the previous one, thus this is not a regression. This can be fixed by adding a counter that is incremented each time the array item is updated, and saving that counter on the shadow stack as well, so that the retfunc is not called if the saved value does not match the one on the array. Note, being able to filter functions when both are called is not completely handled yet, but that shouldn't be too hard to manage. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Check return value of the ftrace_pop_return_trace() instead of 'ret' since 'ret' is set to the address of panic(). - Fix typo and make lines shorter than 76 chars in description. --- kernel/trace/fgraph.c | 332 +++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 280 insertions(+), 52 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 86df3ca6964f..8aba93be11b2 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -27,23 +27,144 @@ #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack) #define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long)) + +/* + * On entry to a function (via function_graph_enter()), a new ftrace_ret_stack + * is allocated on the task's ret_stack, then each fgraph_ops on the + * fgraph_array[]'s entryfunc is called and if that returns non-zero, the + * index into the fgraph_array[] for that fgraph_ops is added to the ret_stack. + * As the associated ftrace_ret_stack saved for those fgraph_ops needs to + * be found, the index to it is also added to the ret_stack along with the + * index of the fgraph_array[] to each fgraph_ops that needs their retfunc + * called. + * + * The top of the ret_stack (when not empty) will always have a reference + * to the last ftrace_ret_stack saved.
All references to the + * ftrace_ret_stack has the format of: + * + * bits: 0 - 13 Index in words from the previous ftrace_ret_stack + * bits: 14 - 15 Type of storage + * 0 - reserved + * 1 - fgraph_array index + * For fgraph_array_index: + * bits: 16 - 23 The fgraph_ops fgraph_array index + * + * That is, at the end of function_graph_enter, if the first and forth + * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called + * on the return of the function being traced, this is what will be on the + * task's shadow ret_stack: (the stack grows upward) + * + * | | <- task->curr_ret_stack + * +----------------------------------+ + * | (3 << FGRAPH_ARRAY_SHIFT)|(2) | ( 3 for index of fourth fgraph_ops) + * +----------------------------------+ + * | (0 << FGRAPH_ARRAY_SHIFT)|(1) | ( 0 for index of first fgraph_ops) + * +----------------------------------+ + * | struct ftrace_ret_stack | + * | (stores the saved ret pointer) | + * +----------------------------------+ + * | (X) | (N) | ( N words away from previous ret_stack) + * | | + * + * If a backtrace is required, and the real return pointer needs to be + * fetched, then it looks at the task's curr_ret_stack index, if it + * is greater than zero, it would subtact one, and then mask the value + * on the ret_stack by FGRAPH_RET_INDEX_MASK and subtract FGRAPH_RET_INDEX + * from that, to get the index of the ftrace_ret_stack structure stored + * on the shadow stack. + */ + +#define FGRAPH_RET_INDEX_SIZE 14 +#define FGRAPH_RET_INDEX_MASK ((1 << FGRAPH_RET_INDEX_SIZE) - 1) + + +#define FGRAPH_TYPE_SIZE 2 +#define FGRAPH_TYPE_MASK ((1 << FGRAPH_TYPE_SIZE) - 1) +#define FGRAPH_TYPE_SHIFT FGRAPH_RET_INDEX_SIZE + +enum { + FGRAPH_TYPE_RESERVED = 0, + FGRAPH_TYPE_ARRAY = 1, +}; + +#define FGRAPH_ARRAY_SIZE 16 +#define FGRAPH_ARRAY_MASK ((1 << FGRAPH_ARRAY_SIZE) - 1) +#define FGRAPH_ARRAY_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) + +/* Currently the max stack index can't be more than register callers */ +#define FGRAPH_MAX_INDEX FGRAPH_ARRAY_SIZE + +#define FGRAPH_FRAME_SIZE (FGRAPH_RET_SIZE + FGRAPH_ARRAY_SIZE * (sizeof(long))) +#define FGRAPH_FRAME_INDEX (ALIGN(FGRAPH_FRAME_SIZE, \ + sizeof(long)) / sizeof(long)) #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ -#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX) +#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1)) #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) -#define RET_STACK_INC(c) ({ c += FGRAPH_RET_INDEX; }) -#define RET_STACK_DEC(c) ({ c -= FGRAPH_RET_INDEX; }) DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; static int fgraph_array_cnt; -#define FGRAPH_ARRAY_SIZE 16 static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; +static inline int get_ret_stack_index(struct task_struct *t, int offset) +{ + return current->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; +} + +static inline int get_fgraph_type(struct task_struct *t, int offset) +{ + return (current->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & + FGRAPH_TYPE_MASK; +} + +static inline int get_fgraph_array(struct task_struct *t, int offset) +{ + return (current->ret_stack[offset] >> FGRAPH_ARRAY_SHIFT) & + FGRAPH_ARRAY_MASK; +} + +/* + * @offset: The index into @t->ret_stack to find the ret_stack entry + * @index: Where to place the index into @t->ret_stack of that entry + * + * Calling this with: + * + * offset = task->curr_ret_stack; + * 
do { + * ret_stack = get_ret_stack(task, offset, &offset); + * } while (ret_stack); + * + * Will iterate through all the ret_stack entries from curr_ret_stack + * down to the first one. + */ +static inline struct ftrace_ret_stack * +get_ret_stack(struct task_struct *t, int offset, int *index) +{ + int idx; + + BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long)); + + if (offset <= 0) + return NULL; + + idx = get_ret_stack_index(t, offset - 1); + + if (idx <= 0 || idx > FGRAPH_MAX_INDEX) + return NULL; + + offset -= idx + FGRAPH_RET_INDEX; + if (offset < 0) + return NULL; + + *index = offset; + return RET_STACK(t, offset); +} + /* Both enabled by default (can be cleared by function_graph tracer flags */ static bool fgraph_sleep_time = true; @@ -126,9 +247,34 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, calltime = trace_clock_local(); index = current->curr_ret_stack; - RET_STACK_INC(current->curr_ret_stack); + /* ret offset = 1 ; type = reserved */ + current->ret_stack[index + FGRAPH_RET_INDEX] = 1; ret_stack = RET_STACK(current, index); + ret_stack->ret = ret; + /* + * The unwinders expect curr_ret_stack to point to either zero + * or an index where to find the next ret_stack. Even though the + * ret stack might be bogus, we want to write the ret and the + * index to find the ret_stack before we increment the stack point. + * If an interrupt comes in now before we increment the curr_ret_stack + * it may blow away what we wrote. But that's fine, because the + * index will still be correct (even though the 'ret' won't be). + * What we worry about is the index being correct after we increment + * the curr_ret_stack and before we update that index, as if an + * interrupt comes in and does an unwind stack dump, it will need + * at least a correct index! + */ barrier(); + current->curr_ret_stack += FGRAPH_RET_INDEX + 1; + /* + * This next barrier is to ensure that an interrupt coming in + * will not corrupt what we are about to write. 
+ */ + barrier(); + + /* Still keep it reserved even if an interrupt came in */ + current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + ret_stack->ret = ret; ret_stack->func = func; ret_stack->calltime = calltime; @@ -159,6 +305,12 @@ int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp) { struct ftrace_graph_ent trace; + int offset; + int start; + int type; + int val; + int cnt = 0; + int i; #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS /* @@ -177,38 +329,87 @@ int function_graph_enter(unsigned long ret, unsigned long func, if (ftrace_push_return_trace(ret, func, frame_pointer, retp)) goto out; - /* Only trace if the calling function expects to */ - if (!fgraph_array[0]->entryfunc(&trace)) + /* Use start for the distance to ret_stack (skipping over reserve) */ + start = offset = current->curr_ret_stack - 2; + + for (i = 0; i < fgraph_array_cnt; i++) { + struct fgraph_ops *gops = fgraph_array[i]; + + if (gops == &fgraph_stub) + continue; + + if ((offset == start) && + (current->curr_ret_stack >= SHADOW_STACK_INDEX - 1)) { + atomic_inc(¤t->trace_overrun); + break; + } + if (fgraph_array[i]->entryfunc(&trace)) { + offset = current->curr_ret_stack; + /* Check the top level stored word */ + type = get_fgraph_type(current, offset - 1); + + val = (i << FGRAPH_ARRAY_SHIFT) | + (FGRAPH_TYPE_ARRAY << FGRAPH_TYPE_SHIFT) | + ((offset - start) - 1); + + /* We can reuse the top word if it is reserved */ + if (type == FGRAPH_TYPE_RESERVED) { + current->ret_stack[offset - 1] = val; + cnt++; + continue; + } + val++; + + current->ret_stack[offset] = val; + /* + * Write the value before we increment, so that + * if an interrupt comes in after we increment + * it will still see the value and skip over + * this. + */ + barrier(); + current->curr_ret_stack++; + /* + * Have to write again, in case an interrupt + * came in before the increment and after we + * wrote the value. 
+ */ + barrier(); + current->ret_stack[offset] = val; + cnt++; + } + } + + if (!cnt) goto out_ret; return 0; out_ret: - RET_STACK_DEC(current->curr_ret_stack); + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; out: current->curr_ret_depth--; return -EBUSY; } /* Retrieve a function return address to the trace stack on thread info.*/ -static void +static struct ftrace_ret_stack * ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, unsigned long frame_pointer) { struct ftrace_ret_stack *ret_stack; int index; - index = current->curr_ret_stack; - RET_STACK_DEC(index); + ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); - if (unlikely(index < 0 || index > SHADOW_STACK_MAX_INDEX)) { + if (unlikely(!ret_stack)) { ftrace_graph_stop(); - WARN_ON(1); + WARN(1, "Bad function graph ret_stack pointer: %d", + current->curr_ret_stack); /* Might as well panic, otherwise we have no where to go */ *ret = (unsigned long)panic; - return; + return NULL; } - ret_stack = RET_STACK(current, index); #ifdef HAVE_FUNCTION_GRAPH_FP_TEST /* * The arch may choose to record the frame pointer used @@ -228,12 +429,12 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, ftrace_graph_stop(); WARN(1, "Bad frame pointer: expected %lx, received %lx\n" " from func %ps return to %lx\n", - current->ret_stack[index].fp, + ret_stack->fp, frame_pointer, (void *)ret_stack->func, ret_stack->ret); *ret = (unsigned long)panic; - return; + return NULL; } #endif @@ -241,13 +442,15 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, trace->func = ret_stack->func; trace->calltime = ret_stack->calltime; trace->overrun = atomic_read(¤t->trace_overrun); - trace->depth = current->curr_ret_depth--; + trace->depth = current->curr_ret_depth; /* * We still want to trace interrupts coming in if * max_depth is set to 1. Make sure the decrement is * seen before ftrace_graph_return. */ barrier(); + + return ret_stack; } /* @@ -285,30 +488,47 @@ struct fgraph_ret_regs; static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs, unsigned long frame_pointer) { + struct ftrace_ret_stack *ret_stack; struct ftrace_graph_ret trace; unsigned long ret; + int offset; + int index; + int idx; + int i; + + ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer); + + if (unlikely(!ret_stack)) { + ftrace_graph_stop(); + WARN_ON(1); + /* Might as well panic. What else to do? */ + return (unsigned long)panic; + } - ftrace_pop_return_trace(&trace, &ret, frame_pointer); + trace.rettime = trace_clock_local(); #ifdef CONFIG_FUNCTION_GRAPH_RETVAL trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif - trace.rettime = trace_clock_local(); - fgraph_array[0]->retfunc(&trace); + + offset = current->curr_ret_stack - 1; + index = get_ret_stack_index(current, offset); + + /* index has to be at least one! Optimize for it */ + i = 0; + do { + idx = get_fgraph_array(current, offset - i); + fgraph_array[idx]->retfunc(&trace); + i++; + } while (i < index); + /* * The ftrace_graph_return() may still access the current * ret_stack structure, we need to make sure the update of * curr_ret_stack is after that. */ barrier(); - RET_STACK_DEC(current->curr_ret_stack); - - if (unlikely(!ret)) { - ftrace_graph_stop(); - WARN_ON(1); - /* Might as well panic. What else to do? 
*/ - ret = (unsigned long)panic; - } - + current->curr_ret_stack -= index + FGRAPH_RET_INDEX; + current->curr_ret_depth--; return ret; } @@ -343,15 +563,17 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer) struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx) { + struct ftrace_ret_stack *ret_stack = NULL; int index = task->curr_ret_stack; - BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long)); - - index -= FGRAPH_RET_INDEX * (idx + 1); if (index < 0) return NULL; - return RET_STACK(task, index); + do { + ret_stack = get_ret_stack(task, index, &index); + } while (ret_stack && --idx >= 0); + + return ret_stack; } /** @@ -374,16 +596,15 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp) { struct ftrace_ret_stack *ret_stack; - int index = task->curr_ret_stack; - int i; + int i = task->curr_ret_stack; if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret; - RET_STACK_DEC(index); - - for (i = index; i >= 0; RET_STACK_DEC(i)) { - ret_stack = RET_STACK(task, i); + while (i > 0) { + ret_stack = get_ret_stack(current, i, &i); + if (!ret_stack) + break; if (ret_stack->retp == retp) return ret_stack->ret; } @@ -394,21 +615,26 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp) { - int task_idx; + struct ftrace_ret_stack *ret_stack; + int task_idx = task->curr_ret_stack; + int i; if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret; - task_idx = task->curr_ret_stack; - RET_STACK_DEC(task_idx); - - if (!task->ret_stack || task_idx < *idx) + if (!idx) return ret; - task_idx -= *idx; - RET_STACK_INC(*idx); + i = *idx; + do { + ret_stack = get_ret_stack(task, task_idx, &task_idx); + i--; + } while (i >= 0 && ret_stack); + + if (ret_stack) + return ret_stack->ret; - return RET_STACK(task, task_idx); + return ret; } #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */ @@ -514,10 +740,10 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, */ timestamp -= next->ftrace_timestamp; - for (index = next->curr_ret_stack - FGRAPH_RET_INDEX; index >= 0; ) { - ret_stack = RET_STACK(next, index); - ret_stack->calltime += timestamp; - index -= FGRAPH_RET_INDEX; + for (index = next->curr_ret_stack; index > 0; ) { + ret_stack = get_ret_stack(next, index, &index); + if (ret_stack) + ret_stack->calltime += timestamp; } } @@ -568,6 +794,8 @@ graph_init_task(struct task_struct *t, unsigned long *ret_stack) { atomic_set(&t->trace_overrun, 0); t->ftrace_timestamp = 0; + t->curr_ret_stack = 0; + t->curr_ret_depth = -1; /* make curr_ret_stack visible before we add the ret_stack */ smp_wmb(); t->ret_stack = ret_stack; From patchwork Fri Jan 12 10:12:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187671 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp76184dyc; Fri, 12 Jan 2024 02:14:32 -0800 (PST) X-Google-Smtp-Source: AGHT+IENpCbvfsUsPCcCqFgdcjo9Se4N8gJoFV95ZfLXsKMhXM9YIZZez4DtKbb9ea7BVHCgDBft X-Received: by 2002:a05:6902:908:b0:db7:dacf:59de with SMTP id bu8-20020a056902090800b00db7dacf59demr549454ybb.82.1705054471770; Fri, 12 Jan 2024 02:14:31 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054471; cv=none; 
bh=81teVRkEOQmLT1f+py8yeysYcy/asIzDLhsSeR8Lg0Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZLtVATPogJrQntaXP8GpSa/9LFHDRgnYiaayANndQ+/yNUI/0a/iJh3kS/8+FteqD tvD8AsKJItng/vAYy//JD67g/0EhF2wH4GSi7EmuN3O6LvkeBKjskRR0UBL7WJLXR7 lgOvuYvbgoPBcrPt279Bzk6iJl3lOxA154rewKG+fl7bgtTdlEhTh24GICyL2UyZWf qaY+DjfCYKE0lMExX7Cq/xweTv/uh3GGjJQVgvTsNo6lE5sblDzCoAp/Ac1O+SrIJI nRy0dpffMQWqm3UOkWvlAqY83cMozjUg+8Qtpgmr3isG26jnbW5vxaAYfoVlKMqD8C 7PGS6wpQWpZEQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 09/36] function_graph: Remove logic around ftrace_graph_entry and return Date: Fri, 12 Jan 2024 19:12:37 +0900 Message-Id: <170505435730.459169.9550497963602539213.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879197924928978 X-GMAIL-MSGID: 1787879197924928978 From: Steven Rostedt (VMware) The function pointers ftrace_graph_entry and ftrace_graph_return are no longer called via the function_graph tracer. Instead, an array structure is now used that will allow for multiple users of the function_graph infrastructure. The variables are still used by the architecture code for non dynamic ftrace configs, where a test is made against them to see if they point to the default stub function or not. This is how the static function tracing knows to call into the function graph tracer infrastructure or not. Two new stub functions are made. entry_run() and return_run(). The ftrace_graph_entry and ftrace_graph_return are set to them respectively when the function graph tracer is enabled, and this will trigger the architecture specific function graph code to be executed. This also requires checking the global_ops hash for all calls into the function_graph tracer. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Fix typo and make lines shorter than 76 chars in the description. - Remove unneeded return from return_run() function. 
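To illustrate the "compare against the default stub" convention described above, here is a small self-contained userspace sketch. All names below are illustrative stand-ins (assumptions), not the kernel's ftrace_graph_entry/ftrace_graph_return symbols; the only point shown is that a static (non-dynamic ftrace) entry path can decide whether to call into the graph machinery by checking whether the hooks still point at the stubs.

#include <stdio.h>

struct graph_ent { unsigned long func; };
struct graph_ret { unsigned long func; };

static int entry_stub(struct graph_ent *trace) { (void)trace; return 0; }
static void return_stub(struct graph_ret *trace) { (void)trace; }

/* Models the global hooks being repointed away from the stubs
 * (to entry_run()/return_run() equivalents) when a user registers. */
static int (*graph_entry)(struct graph_ent *) = entry_stub;
static void (*graph_return)(struct graph_ret *) = return_stub;

/* What a non-dynamic arch's entry code conceptually tests. */
static int graph_tracing_wanted(void)
{
        return graph_entry != entry_stub || graph_return != return_stub;
}

static int entry_run(struct graph_ent *trace) { (void)trace; return 0; }

int main(void)
{
        printf("before register: %d\n", graph_tracing_wanted());  /* prints 0 */
        graph_entry = entry_run;   /* register_ftrace_graph() analogue */
        printf("after register:  %d\n", graph_tracing_wanted());  /* prints 1 */
        return 0;
}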
--- kernel/trace/fgraph.c | 71 +++++++++++----------------------------- kernel/trace/ftrace.c | 2 - kernel/trace/ftrace_internal.h | 2 - 3 files changed, 19 insertions(+), 56 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 8aba93be11b2..97a9ffb8bb4c 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -128,6 +128,17 @@ static inline int get_fgraph_array(struct task_struct *t, int offset) FGRAPH_ARRAY_MASK; } +/* ftrace_graph_entry set to this to tell some archs to run function graph */ +static int entry_run(struct ftrace_graph_ent *trace) +{ + return 0; +} + +/* ftrace_graph_return set to this to tell some archs to run function graph */ +static void return_run(struct ftrace_graph_ret *trace) +{ +} + /* * @offset: The index into @t->ret_stack to find the ret_stack entry * @index: Where to place the index into @t->ret_stack of that entry @@ -323,6 +334,10 @@ int function_graph_enter(unsigned long ret, unsigned long func, ftrace_find_rec_direct(ret - MCOUNT_INSN_SIZE)) return -EBUSY; #endif + + if (!ftrace_ops_test(&global_ops, func, NULL)) + return -EBUSY; + trace.func = func; trace.depth = ++current->curr_ret_depth; @@ -664,7 +679,6 @@ extern void ftrace_stub_graph(struct ftrace_graph_ret *); /* The callbacks that hook a function */ trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph; trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub; -static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub; /* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */ static int alloc_retstack_tasklist(unsigned long **ret_stack_list) @@ -747,46 +761,6 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, } } -static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace) -{ - if (!ftrace_ops_test(&global_ops, trace->func, NULL)) - return 0; - return __ftrace_graph_entry(trace); -} - -/* - * The function graph tracer should only trace the functions defined - * by set_ftrace_filter and set_ftrace_notrace. If another function - * tracer ops is registered, the graph tracer requires testing the - * function against the global ops, and not just trace any function - * that any ftrace_ops registered. - */ -void update_function_graph_func(void) -{ - struct ftrace_ops *op; - bool do_test = false; - - /* - * The graph and global ops share the same set of functions - * to test. If any other ops is on the list, then - * the graph tracing needs to test if its the function - * it should call. - */ - do_for_each_ftrace_op(op, ftrace_ops_list) { - if (op != &global_ops && op != &graph_ops && - op != &ftrace_list_end) { - do_test = true; - /* in double loop, break out with goto */ - goto out; - } - } while_for_each_ftrace_op(op); - out: - if (do_test) - ftrace_graph_entry = ftrace_graph_entry_test; - else - ftrace_graph_entry = __ftrace_graph_entry; -} - static DEFINE_PER_CPU(unsigned long *, idle_ret_stack); static void @@ -927,18 +901,12 @@ int register_ftrace_graph(struct fgraph_ops *gops) ftrace_graph_active--; goto out; } - - ftrace_graph_return = gops->retfunc; - /* - * Update the indirect function to the entryfunc, and the - * function that gets called to the entry_test first. Then - * call the update fgraph entry function to determine if - * the entryfunc should be called directly or not. 
+ * Some archs just test to see if these are not + * the default function */ - __ftrace_graph_entry = gops->entryfunc; - ftrace_graph_entry = ftrace_graph_entry_test; - update_function_graph_func(); + ftrace_graph_return = return_run; + ftrace_graph_entry = entry_run; ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET); } @@ -974,7 +942,6 @@ void unregister_ftrace_graph(struct fgraph_ops *gops) if (!ftrace_graph_active) { ftrace_graph_return = ftrace_stub_graph; ftrace_graph_entry = ftrace_graph_entry_stub; - __ftrace_graph_entry = ftrace_graph_entry_stub; ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); unregister_pm_notifier(&ftrace_suspend_notifier); unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index c060d5b47910..11aac697d40f 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -235,8 +235,6 @@ static void update_ftrace_function(void) func = ftrace_ops_list_func; } - update_function_graph_func(); - /* If there's no change, then do nothing more here */ if (ftrace_trace_function == func) return; diff --git a/kernel/trace/ftrace_internal.h b/kernel/trace/ftrace_internal.h index 5012c04f92c0..19eddcb91584 100644 --- a/kernel/trace/ftrace_internal.h +++ b/kernel/trace/ftrace_internal.h @@ -42,10 +42,8 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs) #ifdef CONFIG_FUNCTION_GRAPH_TRACER extern int ftrace_graph_active; -void update_function_graph_func(void); #else /* !CONFIG_FUNCTION_GRAPH_TRACER */ # define ftrace_graph_active 0 -static inline void update_function_graph_func(void) { } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #else /* !CONFIG_FUNCTION_TRACER */ From patchwork Fri Jan 12 10:12:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187672 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp76319dyc; Fri, 12 Jan 2024 02:14:51 -0800 (PST) X-Google-Smtp-Source: AGHT+IERXuzMZQOLHtECj8Hszc24vIA1VAX3jZUv0wNaQ4YtNTL+vI5ZgFFsaWYaO61+2hXonXUg X-Received: by 2002:aa7:d486:0:b0:553:671f:5caf with SMTP id b6-20020aa7d486000000b00553671f5cafmr1240328edr.16.1705054490920; Fri, 12 Jan 2024 02:14:50 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054490; cv=none; d=google.com; s=arc-20160816; b=CSvydcxmEbwg1xxlljC+n5bUK7zVBEYLuuXJ8rQiEQW36UwAWfOvtluFiB+tRRbtGn jDuVj+FGqSi/pjrv/wOzWlKX2JLz82wWXsHkl1zRkr27I1V5DiWfVnD0MBV4/JcVLvjL R9DbW6Opb25sPgO5u5eBoECudQyWur+tc4Yh91L/THWqPqmDbiJzTUWTE2eY0xfcKNAY d9euhRWG8LDUUv9gI08lW+WYdSmmFROyMWJSgLmIHPDny9t/m7t+T6KnyliLLWT79/Lm lQxILx22EUWvKKoF9QJoHYWjNCMJrnzn7QKpARNbn/pKktuZ0e99amf5ZqW6Q3Atl+4p EyDA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=oS2kCSvrkBk3KjkuKpOT6ug+qysSUx1NBdsjtBfAomo=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=TKmR8zLXHPybJ/LPjuOQNqeyZMh0lC5hfC4SczQfoCpEbosr0jqZajQyVHyxZKK0Lr yl0T3Q6CY4tYqLFe5FYsDhtT4mmh9OW6T+awzjzP42gX3Fy4M2kKkjFff8KUWIkft+sW wxphCnQjDd8tILY8sVkHZ0c/ZUQgVOxy+6s46g+SGEkZm0HjSAvcl/K7v3jktHqCjkgL 7piaMoKjeN5Rk+CBopHc9HPDSEO8vKTpC2irtAEzJoydmGJli3onxoBKUM/gn+sJ+Fqh hbEPucLTnRluYf/s2dCT8EUIAEDlyOiL+5vHZzdiKcyG7TQmZFV9WZUXyigEckjhv9g+ PrWQ== ARC-Authentication-Results: i=1; mx.google.com; 
dkim=pass header.i=@kernel.org header.s=k20201202 header.b=Fv8sCjd9; spf=pass (google.com: domain of linux-kernel+bounces-24556-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24556-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. [2604:1380:4601:e00::3]) by mx.google.com with ESMTPS id k19-20020aa7c393000000b005548f2eb019si1269615edq.544.2024.01.12.02.14.50 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:14:50 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24556-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) client-ip=2604:1380:4601:e00::3; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=Fv8sCjd9; spf=pass (google.com: domain of linux-kernel+bounces-24556-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24556-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by am.mirrors.kernel.org (Postfix) with ESMTPS id 592521F22DCA for ; Fri, 12 Jan 2024 10:14:50 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 5A7095D916; Fri, 12 Jan 2024 10:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Fv8sCjd9" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8F82B60B9F; Fri, 12 Jan 2024 10:12:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4BF7AC433C7; Fri, 12 Jan 2024 10:12:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054376; bh=0AANPKEgN03NwBAAfHz+sP3dxJcTc/V4NNlVvj1odV4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Fv8sCjd9brPZM0a/CTdlUtF2urbpyb9t9AcTpX5wHRrJOEYDU0F7fdQLy5vBD3tWW JG+YGbQ06HAuFzZGKVIzpYzk/avVLv90j59ZcSX4V/7yoIPwzbRRPpGMNdF3iSUjqs /73waU8FIFaKQs1GFk4N1vf8U3xDTldOBbSc0s1244KNOLaaoMLq4KmU6lFSBlLHR6 IANetVeYjAfiHOOcE0BAq+ygJHieODNcV+BRc4ZP8TwnosUxahat8zmYQbqv1eqAeO om6FO9rcXAfRuzCoaQfYlf+b2nlfDxj5p/O7RunRaOAeHqQUXva/J9yhPuHMB0xeCn bRXJ1ZaaOg0HA== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 10/36] ftrace/function_graph: Pass fgraph_ops to function graph callbacks Date: Fri, 12 Jan 2024 19:12:49 +0900 Message-Id: <170505436970.459169.3967671777862667053.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: 
bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879217677298157 X-GMAIL-MSGID: 1787879217677298157 From: Steven Rostedt (VMware) Pass the fgraph_ops structure to the function graph callbacks. This will allow callbacks to add a descriptor to a fgraph_ops private field that wil be added in the future and use it for the callbacks. This will be useful when more than one callback can be registered to the function graph tracer. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - cleanup to set argument name on function prototype. --- include/linux/ftrace.h | 10 +++++++--- kernel/trace/fgraph.c | 16 +++++++++------- kernel/trace/ftrace.c | 6 ++++-- kernel/trace/trace.h | 4 ++-- kernel/trace/trace_functions_graph.c | 11 +++++++---- kernel/trace/trace_irqsoff.c | 6 ++++-- kernel/trace/trace_sched_wakeup.c | 6 ++++-- kernel/trace/trace_selftest.c | 5 +++-- 8 files changed, 40 insertions(+), 24 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 39ac1f3e8041..d173270352c3 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1055,11 +1055,15 @@ struct ftrace_graph_ret { unsigned long long rettime; } __packed; +struct fgraph_ops; + /* Type of the callback handlers for tracing function graph*/ -typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */ -typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *); /* entry */ +typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, + struct fgraph_ops *); /* return */ +typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, + struct fgraph_ops *); /* entry */ -extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace); +extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); #ifdef CONFIG_FUNCTION_GRAPH_TRACER diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 97a9ffb8bb4c..62c35d6d95f9 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -129,13 +129,13 @@ static inline int get_fgraph_array(struct task_struct *t, int offset) } /* ftrace_graph_entry set to this to tell some archs to run function graph */ -static int entry_run(struct ftrace_graph_ent *trace) +static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops) { return 0; } /* ftrace_graph_return set to this to tell some archs to run function graph */ -static void return_run(struct ftrace_graph_ret *trace) +static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops) { } @@ -199,12 +199,14 @@ int __weak ftrace_disable_ftrace_graph_caller(void) } #endif -int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) +int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { return 0; } -static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace) +static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { } @@ -358,7 +360,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, atomic_inc(¤t->trace_overrun); break; } - if (fgraph_array[i]->entryfunc(&trace)) { + if (fgraph_array[i]->entryfunc(&trace, fgraph_array[i])) { offset = current->curr_ret_stack; /* Check the top level stored word */ type = get_fgraph_type(current, offset - 1); @@ -532,7 +534,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs i = 0; do { idx = 
get_fgraph_array(current, offset - i); - fgraph_array[idx]->retfunc(&trace); + fgraph_array[idx]->retfunc(&trace, fgraph_array[idx]); i++; } while (i < index); @@ -674,7 +676,7 @@ void ftrace_graph_sleep_time_control(bool enable) * Simply points to ftrace_stub, but with the proper protocol. * Defined by the linker script in linux/vmlinux.lds.h */ -extern void ftrace_stub_graph(struct ftrace_graph_ret *); +void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); /* The callbacks that hook a function */ trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph; diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 11aac697d40f..b063ab2d2b1f 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -815,7 +815,8 @@ void ftrace_graph_graph_time_control(bool enable) fgraph_graph_time = enable; } -static int profile_graph_entry(struct ftrace_graph_ent *trace) +static int profile_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct ftrace_ret_stack *ret_stack; @@ -832,7 +833,8 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace) return 1; } -static void profile_graph_return(struct ftrace_graph_ret *trace) +static void profile_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct ftrace_ret_stack *ret_stack; struct ftrace_profile_stat *stat; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 0489e72c8169..b04a18af71e4 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -676,8 +676,8 @@ void trace_latency_header(struct seq_file *m); void trace_default_header(struct seq_file *m); void print_trace_header(struct seq_file *m, struct trace_iterator *iter); -void trace_graph_return(struct ftrace_graph_ret *trace); -int trace_graph_entry(struct ftrace_graph_ent *trace); +void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); +int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); void set_graph_array(struct trace_array *tr); void tracing_start_cmdline_record(void); diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index c35fbaab2a47..b7b142b65299 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -129,7 +129,8 @@ static inline int ftrace_graph_ignore_irqs(void) return in_hardirq(); } -int trace_graph_entry(struct ftrace_graph_ent *trace) +int trace_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = graph_array; struct trace_array_cpu *data; @@ -238,7 +239,8 @@ void __trace_graph_return(struct trace_array *tr, trace_buffer_unlock_commit_nostack(buffer, event); } -void trace_graph_return(struct ftrace_graph_ret *trace) +void trace_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = graph_array; struct trace_array_cpu *data; @@ -275,7 +277,8 @@ void set_graph_array(struct trace_array *tr) smp_mb(); } -static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) +static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { ftrace_graph_addr_finish(trace); @@ -288,7 +291,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) (trace->rettime - trace->calltime < tracing_thresh)) return; else - trace_graph_return(trace); + trace_graph_return(trace, gops); } static struct fgraph_ops funcgraph_thresh_ops = { diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 
ba37f768e2f2..5478f4c4f708 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -175,7 +175,8 @@ static int irqsoff_display_graph(struct trace_array *tr, int set) return start_irqsoff_tracer(irqsoff_trace, set); } -static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) +static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; @@ -205,7 +206,8 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) return ret; } -static void irqsoff_graph_return(struct ftrace_graph_ret *trace) +static void irqsoff_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 0469a04a355f..49bcc812652c 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -112,7 +112,8 @@ static int wakeup_display_graph(struct trace_array *tr, int set) return start_func_tracer(tr, set); } -static int wakeup_graph_entry(struct ftrace_graph_ent *trace) +static int wakeup_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; @@ -141,7 +142,8 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace) return ret; } -static void wakeup_graph_return(struct ftrace_graph_ret *trace) +static void wakeup_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 529590499b1f..914331d8242c 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -762,7 +762,8 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) static unsigned int graph_hang_thresh; /* Wrap the real function entry probe to avoid possible hanging */ -static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace) +static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { /* This is harmlessly racy, we want to approximately detect a hang */ if (unlikely(++graph_hang_thresh > GRAPH_MAX_FUNC_TEST)) { @@ -776,7 +777,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace) return 0; } - return trace_graph_entry(trace); + return trace_graph_entry(trace, gops); } static struct fgraph_ops fgraph_ops __initdata = { From patchwork Fri Jan 12 10:13:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187673 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp76426dyc; Fri, 12 Jan 2024 02:15:08 -0800 (PST) X-Google-Smtp-Source: AGHT+IHrUORU9O2GpGl9wALUdYclASS3Ij8Bq0I7aRrYOwlfz38atsHR8Xc5RvK1YRnh921RM7rM X-Received: by 2002:a17:906:2359:b0:a2b:804a:4192 with SMTP id m25-20020a170906235900b00a2b804a4192mr564630eja.57.1705054508632; Fri, 12 Jan 2024 02:15:08 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054508; cv=none; d=google.com; s=arc-20160816; b=lQK2spccE81wT1k0zfY24ykNYD16900tKdDpZ0eEPu1yca/N/5iTgQ7pOtFPPhjdOg 5mI+JRd/BJWr/69CrCOPqX11R/laTETbAEA1D0nvIFXSWSDdcgWQRuBi6DMjn9ZVigDS 55/v4PI3htL4RlAssSB2KRXslKseXYTB9RVOgrpXscZLK8lTfpgSXoLHW8dQ6+88kLQ7 
otkgV6NZ8LT6bMOb9qnhck3kFoEwRNIGpPS32ufKeGa0XS66UJv7dTxd2NgMdZ9Eym WeZVlpCpKSzkl9DamPOxRfL32bvhBZCGBTcItmfW/bGh085sorHxaobWht/Z9ozVnm cUwlGqidH2LHqiiUUeTAkP0SH9uM+T3eQKd9qClEzE0qPTPphCu8FqOL+lXW91k5vw /HMQH6pLSTBkQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 11/36] ftrace: Allow function_graph tracer to be enabled in instances Date: Fri, 12 Jan 2024 19:13:01 +0900 Message-Id: <170505438157.459169.6484982360827324036.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879236676966620 X-GMAIL-MSGID: 1787879236676966620 From: Steven Rostedt (VMware) Now that function graph tracing can handle more than one user, allow it to be enabled in the ftrace instances. Note, the filtering of the functions is still joined by the top level set_ftrace_filter and friends, as well as the graph and nograph files. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Fix to remove set_graph_array() completely. --- include/linux/ftrace.h | 1 + kernel/trace/ftrace.c | 1 + kernel/trace/trace.h | 13 ++++++- kernel/trace/trace_functions.c | 8 ++++ kernel/trace/trace_functions_graph.c | 65 +++++++++++++++++++++------------- kernel/trace/trace_selftest.c | 4 +- 6 files changed, 64 insertions(+), 28 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index d173270352c3..4df3f44043b8 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1070,6 +1070,7 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + void *private; }; /* diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index b063ab2d2b1f..a720dd7cf290 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -7323,6 +7323,7 @@ __init void ftrace_init_global_array_ops(struct trace_array *tr) tr->ops = &global_ops; tr->ops->private = tr; ftrace_init_trace_array(tr); + init_array_fgraph_ops(tr); } void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func) diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index b04a18af71e4..b11e4cf4f72e 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -395,6 +395,9 @@ struct trace_array { struct ftrace_ops *ops; struct trace_pid_list __rcu *function_pids; struct trace_pid_list __rcu *function_no_pids; +#ifdef CONFIG_FUNCTION_GRAPH_TRACER + struct fgraph_ops *gops; +#endif #ifdef CONFIG_DYNAMIC_FTRACE /* All of these are protected by the ftrace_lock */ struct list_head func_probes; @@ -678,7 +681,6 @@ void print_trace_header(struct seq_file *m, struct trace_iterator *iter); void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); -void set_graph_array(struct trace_array *tr); void tracing_start_cmdline_record(void); void 
tracing_stop_cmdline_record(void); @@ -889,6 +891,9 @@ extern int __trace_graph_entry(struct trace_array *tr, extern void __trace_graph_return(struct trace_array *tr, struct ftrace_graph_ret *trace, unsigned int trace_ctx); +extern void init_array_fgraph_ops(struct trace_array *tr); +extern int allocate_fgraph_ops(struct trace_array *tr); +extern void free_fgraph_ops(struct trace_array *tr); #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; @@ -1001,6 +1006,12 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags) { return TRACE_TYPE_UNHANDLED; } +static inline void init_array_fgraph_ops(struct trace_array *tr) { } +static inline int allocate_fgraph_ops(struct trace_array *tr) +{ + return 0; +} +static inline void free_fgraph_ops(struct trace_array *tr) { } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ extern struct list_head ftrace_pids; diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c index 9f1bfbe105e8..8e8da0d0ee52 100644 --- a/kernel/trace/trace_functions.c +++ b/kernel/trace/trace_functions.c @@ -80,6 +80,7 @@ void ftrace_free_ftrace_ops(struct trace_array *tr) int ftrace_create_function_files(struct trace_array *tr, struct dentry *parent) { + int ret; /* * The top level array uses the "global_ops", and the files are * created on boot up. @@ -90,6 +91,12 @@ int ftrace_create_function_files(struct trace_array *tr, if (!tr->ops) return -EINVAL; + ret = allocate_fgraph_ops(tr); + if (ret) { + kfree(tr->ops); + return ret; + } + ftrace_create_filter_files(tr->ops, parent); return 0; @@ -99,6 +106,7 @@ void ftrace_destroy_function_files(struct trace_array *tr) { ftrace_destroy_filter_files(tr->ops); ftrace_free_ftrace_ops(tr); + free_fgraph_ops(tr); } static ftrace_func_t select_trace_function(u32 flags_val) diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index b7b142b65299..9ccc904a7703 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -83,8 +83,6 @@ static struct tracer_flags tracer_flags = { .opts = trace_opts }; -static struct trace_array *graph_array; - /* * DURATION column is being also used to display IRQ signs, * following values are used by print_graph_irq and others @@ -132,7 +130,7 @@ static inline int ftrace_graph_ignore_irqs(void) int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops) { - struct trace_array *tr = graph_array; + struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; unsigned int trace_ctx; @@ -242,7 +240,7 @@ void __trace_graph_return(struct trace_array *tr, void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { - struct trace_array *tr = graph_array; + struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; unsigned int trace_ctx; @@ -268,15 +266,6 @@ void trace_graph_return(struct ftrace_graph_ret *trace, local_irq_restore(flags); } -void set_graph_array(struct trace_array *tr) -{ - graph_array = tr; - - /* Make graph_array visible before we start tracing */ - - smp_mb(); -} - static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { @@ -294,25 +283,53 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, trace_graph_return(trace, gops); } -static struct fgraph_ops funcgraph_thresh_ops = { - .entryfunc = &trace_graph_entry, - .retfunc = &trace_graph_thresh_return, -}; - static struct fgraph_ops funcgraph_ops = { .entryfunc = 
&trace_graph_entry, .retfunc = &trace_graph_return, }; +int allocate_fgraph_ops(struct trace_array *tr) +{ + struct fgraph_ops *gops; + + gops = kzalloc(sizeof(*gops), GFP_KERNEL); + if (!gops) + return -ENOMEM; + + gops->entryfunc = &trace_graph_entry; + gops->retfunc = &trace_graph_return; + + tr->gops = gops; + gops->private = tr; + return 0; +} + +void free_fgraph_ops(struct trace_array *tr) +{ + kfree(tr->gops); +} + +__init void init_array_fgraph_ops(struct trace_array *tr) +{ + tr->gops = &funcgraph_ops; + funcgraph_ops.private = tr; +} + static int graph_trace_init(struct trace_array *tr) { int ret; - set_graph_array(tr); + tr->gops->entryfunc = trace_graph_entry; + if (tracing_thresh) - ret = register_ftrace_graph(&funcgraph_thresh_ops); + tr->gops->retfunc = trace_graph_thresh_return; else - ret = register_ftrace_graph(&funcgraph_ops); + tr->gops->retfunc = trace_graph_return; + + /* Make gops functions are visible before we start tracing */ + smp_mb(); + + ret = register_ftrace_graph(tr->gops); if (ret) return ret; tracing_start_cmdline_record(); @@ -323,10 +340,7 @@ static int graph_trace_init(struct trace_array *tr) static void graph_trace_reset(struct trace_array *tr) { tracing_stop_cmdline_record(); - if (tracing_thresh) - unregister_ftrace_graph(&funcgraph_thresh_ops); - else - unregister_ftrace_graph(&funcgraph_ops); + unregister_ftrace_graph(tr->gops); } static int graph_trace_update_thresh(struct trace_array *tr) @@ -1365,6 +1379,7 @@ static struct tracer graph_trace __tracer_data = { .print_header = print_graph_headers, .flags = &tracer_flags, .set_flag = func_graph_set_flag, + .allow_instances = true, #ifdef CONFIG_FTRACE_SELFTEST .selftest = trace_selftest_startup_function_graph, #endif diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 914331d8242c..f0758afa2f7d 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -813,7 +813,7 @@ trace_selftest_startup_function_graph(struct tracer *trace, * to detect and recover from possible hangs */ tracing_reset_online_cpus(&tr->array_buffer); - set_graph_array(tr); + fgraph_ops.private = tr; ret = register_ftrace_graph(&fgraph_ops); if (ret) { warn_failed_init_tracer(trace, ret); @@ -856,7 +856,7 @@ trace_selftest_startup_function_graph(struct tracer *trace, cond_resched(); tracing_reset_online_cpus(&tr->array_buffer); - set_graph_array(tr); + fgraph_ops.private = tr; /* * Some archs *cough*PowerPC*cough* add characters to the From patchwork Fri Jan 12 10:13:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187674 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp76911dyc; Fri, 12 Jan 2024 02:16:11 -0800 (PST) X-Google-Smtp-Source: AGHT+IHyke1oPY+J7NLdv3uqNFQnwuAGjYTYJIPvXTOmpr8Pc35+5H03Z1BNC4S+AbCT7qovA3OL X-Received: by 2002:a05:6e02:10c5:b0:360:780c:d2bc with SMTP id s5-20020a056e0210c500b00360780cd2bcmr714633ilj.10.1705054571187; Fri, 12 Jan 2024 02:16:11 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054571; cv=none; d=google.com; s=arc-20160816; b=us3FZZp6fInQAZv0N1pVIwbumYkqGts/uvnB91JyA8VQl0rN6C2sG/PPgD+thosj4e laT36bfOiYcn5wSwJ3Rh6zrutGVdha3YYVkltCSt2dSDRzxP7nnhh4mqSNo0nZTQzXcn fDa9//C5YdKf8AIfAvAM5ZyDnncvY0/Jj2bvyJrXWyAMYjMgdQgzSj9rZhqG9HQEkDFj 0/InXWafMSp1tJ6+Hd5ko8cVlsngUi+pCso8WfRz0xBvP0l78aVhv6LgYU6f3mAEWNdc 7fRShXTJQEeD98oDnKBD9Q7znYVAJi+++BVk4A9nYx/ARKE4379Fsbi0f/wZoSuQUfIl 
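To recap what the patch above sets up: the global graph_array pointer is gone, every trace_array instance owns a fgraph_ops, and the shared callbacks recover the instance through gops->private. A condensed sketch of the callback side (the function name matches the patch, the body is abbreviated):

	int trace_graph_entry(struct ftrace_graph_ent *trace,
			      struct fgraph_ops *gops)
	{
		struct trace_array *tr = gops->private;	/* per-instance, no global */

		/* ... record the entry event into this instance's buffers ... */
		return 1;
	}

On the setup side, allocate_fgraph_ops() stores the back pointer (gops->private = tr), so two instances registered at the same time never share state.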
W/AA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=e19aMjyBcj8YvUj06W0kkVs70sAQ8Rm0yENW21CdRuk=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=o5Sr46EDqr0J2wP5fkSHCcZukEqVzTs/yBTRFFK/B+5YKsCmZIxc7iQ40QSsSg/ycC SiuaHyWmNtrt0zoLTL+hD9QZ5aA5fMIsr0UNKNuKJPx5kvg8SIZvqNUbqoIo5POWwBl1 iGcZu4Fdx1WW1EAhhOSqGnad1VPJd+yi/rV/Zi8Gcib0xxVu9YmfZL+eSa7NCoNA6qz0 IfDyp6MDfBmnxnNWrC1ctsaPUHr7d8IwULvc72Zp8Df3cyd9bnjbq+cVNE1cow2GA5ww 6krMsyNc0PMCtWr9rUR+gwk9RKYVUY/vJkFISOiodECli9V6V6INlQDiygywPW0jBEbM Rrhw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=TywDYLX7; spf=pass (google.com: domain of linux-kernel+bounces-24559-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24559-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. [139.178.88.99]) by mx.google.com with ESMTPS id l23-20020a635b57000000b005ceef6e1c1csi3048752pgm.708.2024.01.12.02.16.11 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:16:11 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24559-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) client-ip=139.178.88.99; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=TywDYLX7; spf=pass (google.com: domain of linux-kernel+bounces-24559-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24559-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sv.mirrors.kernel.org (Postfix) with ESMTPS id E087128A29A for ; Fri, 12 Jan 2024 10:15:36 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 0F39460EDF; Fri, 12 Jan 2024 10:13:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TywDYLX7" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6735760BBB; Fri, 12 Jan 2024 10:13:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3CB87C433F1; Fri, 12 Jan 2024 10:13:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054400; bh=tceqhLe4cJhIgfqnJRj1OF3FBNK4igZzNJJ28B0Dd+g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TywDYLX7N/QOqXQe/m78liBkfVdgieGCigbUxXSPTMT4C7R73bCnwycvrhos29fPZ unr1rWBYcN3Zc2Ega07ocRFoxy0CwCcF9DXq5L9XGzrZ+QekDiqaQ7M6ZxPjfaLP5W eNr8ZQc/AAA0Qy/RdZihJpHL42f54YBtWCc7lCcUCqrYfVuidOCvjOrg9ozCynl6qn hr2kHu5ArvYhi4j1PZCkmNNHI8lHqfbDMa8TbLyCFwQtSWKwjETKyexmfwEjoGdp5p cK8Z+abiy9JcdyV8FXbLU2wcilKxSkuHmsuXQQ2g61Y4DxXJNieyBN5cNePaQufrSn NDC34ZMC4XW0Q== From: 
"Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 12/36] ftrace: Allow ftrace startup flags exist without dynamic ftrace Date: Fri, 12 Jan 2024 19:13:13 +0900 Message-Id: <170505439363.459169.10485331447276132041.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879301850975811 X-GMAIL-MSGID: 1787879301850975811 From: Steven Rostedt (VMware) Some of the flags for ftrace_startup() may be exposed even when CONFIG_DYNAMIC_FTRACE is not configured in. This is fine as the difference between dynamic ftrace and static ftrace is done within the internals of ftrace itself. No need to have use cases fail to compile because dynamic ftrace is disabled. This change is needed to move some of the logic of what is passed to ftrace_startup() out of the parameters of ftrace_startup(). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/ftrace.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 4df3f44043b8..c385ded1875f 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -538,6 +538,15 @@ static inline void stack_tracer_disable(void) { } static inline void stack_tracer_enable(void) { } #endif +enum { + FTRACE_UPDATE_CALLS = (1 << 0), + FTRACE_DISABLE_CALLS = (1 << 1), + FTRACE_UPDATE_TRACE_FUNC = (1 << 2), + FTRACE_START_FUNC_RET = (1 << 3), + FTRACE_STOP_FUNC_RET = (1 << 4), + FTRACE_MAY_SLEEP = (1 << 5), +}; + #ifdef CONFIG_DYNAMIC_FTRACE void ftrace_arch_code_modify_prepare(void); @@ -632,15 +641,6 @@ void ftrace_set_global_notrace(unsigned char *buf, int len, int reset); void ftrace_free_filter(struct ftrace_ops *ops); void ftrace_ops_set_global_filter(struct ftrace_ops *ops); -enum { - FTRACE_UPDATE_CALLS = (1 << 0), - FTRACE_DISABLE_CALLS = (1 << 1), - FTRACE_UPDATE_TRACE_FUNC = (1 << 2), - FTRACE_START_FUNC_RET = (1 << 3), - FTRACE_STOP_FUNC_RET = (1 << 4), - FTRACE_MAY_SLEEP = (1 << 5), -}; - /* * The FTRACE_UPDATE_* enum is used to pass information back * from the ftrace_update_record() and ftrace_test_record() From patchwork Fri Jan 12 10:13:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187677 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77447dyc; Fri, 12 Jan 2024 02:17:30 -0800 (PST) X-Google-Smtp-Source: AGHT+IGlnkf7Dlv/dUsusvnu30zgY11T3tXmP2Pcp63MgoqceACfEkjXaHQMjJUYHJlF7WnhGXkR X-Received: by 2002:a17:902:a38e:b0:1d4:6274:b4fb with SMTP id x14-20020a170902a38e00b001d46274b4fbmr467037pla.20.1705054650757; Fri, 12 Jan 2024 02:17:30 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054650; cv=none; d=google.com; s=arc-20160816; b=Qwj8PWXZM7lb33WmwFZ4WDYwZ6/5tXs6ufa4BlbT4yUNSrb50+biqbWqnwYyY+CO60 
b=EiNXJFivEfm0EMwHwRf2HXcQevK/J3/Pnii8clax+X8hx2ZtR+5LrGFBYM3Ymz8Wf Ynx2ZbwXaIKEgUn5XD4n9otynw3Mtz4ejdEnVuBpjQDEklp3F1qwQobyJRndHL+0tO gSUj5Crzr6D0YrMArDZslfx+wpJ6HbXVxEYk0fmQJWX/dakrgQlioqTBgnfupPQI30 ekOimvXxjg+PmgFcF/li+mWRKhq6q5AXhUi0P1RaCnhLuWFl7xOFIEz4JIyrRavqV0 /1m7VN0pjhCRrKKOgAt9Y8kyubYopvwBYkTiXUQSZFQp0Y64Wajm2bNc/pg/v4Sqvz AnnUnReks26LQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 13/36] function_graph: Have the instances use their own ftrace_ops for filtering Date: Fri, 12 Jan 2024 19:13:25 +0900 Message-Id: <170505440540.459169.9150547754808415151.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879385435742651 X-GMAIL-MSGID: 1787879385435742651 From: Steven Rostedt (VMware) Allow for instances to have their own ftrace_ops part of the fgraph_ops that makes the funtion_graph tracer filter on the set_ftrace_filter file of the instance and not the top instance. This also change how the function_graph handles multiple instances on the shadow stack. Previously we use ARRAY type entries to record which one is enabled, and this makes it a bitmap of the fgraph_array's indexes. Previous function_graph_enter() expects calling back from prepare_ftrace_return() function which is called back only once if it is enabled. But this introduces different ftrace_ops for each fgraph instance and those are called from ftrace_graph_func() one by one. Thus we can not loop on the fgraph_array(), and need to reuse the ret_stack pushed by the previous instance. Finding the ret_stack is easy because we can check the ret_stack->func. But that is not enough for the self- recursive tail-call case. Thus fgraph uses the bitmap entry to find it is already set (this means that entry is for previous tail call). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v6: - Fix to check whether the fgraph_ops is already unregistered in function_graph_enter_ops(). - Fix stack unwinder error on arm64 because of passing wrong value as retp. Thanks Mark! Changes in v4: - Simplify get_ret_stack() sanity check and use WARN_ON_ONCE() for obviously wrong value. - Do not check ret == return_to_handler but always read the previous ret_stack in ftrace_push_return_trace() to check it is reusable. - Set the bit 0 of the bitmap entry always in function_graph_enter() because it uses bit 0 to check re-usability. - Fix to ensure the ret_stack entry is bitmap type when checking the bitmap. Changes in v3: - Pass current fgraph_ops to the new entry handler (function_graph_enter_ops) if fgraph use ftrace. - Add fgraph_ops::idx in this patch. - Replace the array type with the bitmap type so that it can record which fgraph is called. - Fix some helper function to use passed task_struct instead of current. - Reduce the ret-index size to 1024 words. - Make the ret-index directly points the ret_stack. 
- Fix ftrace_graph_ret_addr() to handle tail-call case correctly. Changes in v2: - Use ftrace_graph_func and FTRACE_OPS_GRAPH_STUB instead of ftrace_stub and FTRACE_OPS_FL_STUB for new ftrace based fgraph. --- arch/arm64/kernel/ftrace.c | 21 ++ arch/x86/kernel/ftrace.c | 19 ++ include/linux/ftrace.h | 7 + kernel/trace/fgraph.c | 372 ++++++++++++++++++++-------------- kernel/trace/ftrace.c | 6 - kernel/trace/trace.h | 16 + kernel/trace/trace_functions.c | 2 kernel/trace/trace_functions_graph.c | 8 + 8 files changed, 282 insertions(+), 169 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index a650f5e11fc5..b96740829798 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -481,7 +481,26 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - prepare_ftrace_return(ip, &fregs->lr, fregs->fp); + struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); + unsigned long frame_pointer = fregs->fp; + unsigned long *parent = &fregs->lr; + int bit; + + if (unlikely(ftrace_graph_is_dead())) + return; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + bit = ftrace_test_recursion_trylock(ip, *parent); + if (bit < 0) + return; + + if (!function_graph_enter_ops(*parent, ip, frame_pointer, + (void *)frame_pointer, gops)) + *parent = (unsigned long)&return_to_handler; + + ftrace_test_recursion_unlock(bit); } #else /* diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 12df54ff0e81..845e29b4254f 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -657,9 +657,24 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { struct pt_regs *regs = &fregs->regs; - unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs); + unsigned long *parent = (unsigned long *)kernel_stack_pointer(regs); + struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); + int bit; + + if (unlikely(ftrace_graph_is_dead())) + return; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; - prepare_ftrace_return(ip, (unsigned long *)stack, 0); + bit = ftrace_test_recursion_trylock(ip, *parent); + if (bit < 0) + return; + + if (!function_graph_enter_ops(*parent, ip, 0, parent, gops)) + *parent = (unsigned long)&return_to_handler; + + ftrace_test_recursion_unlock(bit); } #endif diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index c385ded1875f..3d9e74ea6065 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1070,7 +1070,9 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + struct ftrace_ops ops; /* for the hash lists */ void *private; + int idx; }; /* @@ -1104,6 +1106,11 @@ extern int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp); +extern int +function_graph_enter_ops(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct fgraph_ops *gops); + struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx); diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 62c35d6d95f9..5724062846f7 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -7,6 +7,7 @@ * * Highly modified by Steven 
Rostedt (VMware). */ +#include #include #include #include @@ -17,22 +18,15 @@ #include "ftrace_internal.h" #include "trace.h" -#ifdef CONFIG_DYNAMIC_FTRACE -#define ASSIGN_OPS_HASH(opsname, val) \ - .func_hash = val, \ - .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock), -#else -#define ASSIGN_OPS_HASH(opsname, val) -#endif - #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack) #define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long)) /* * On entry to a function (via function_graph_enter()), a new ftrace_ret_stack - * is allocated on the task's ret_stack, then each fgraph_ops on the - * fgraph_array[]'s entryfunc is called and if that returns non-zero, the - * index into the fgraph_array[] for that fgraph_ops is added to the ret_stack. + * is allocated on the task's ret_stack with indexes entry, then each + * fgraph_ops on the fgraph_array[]'s entryfunc is called and if that returns + * non-zero, the index into the fgraph_array[] for that fgraph_ops is recorded + * on the indexes entry as a bit flag. * As the associated ftrace_ret_stack saved for those fgraph_ops needs to * be found, the index to it is also added to the ret_stack along with the * index of the fgraph_array[] to each fgraph_ops that needs their retfunc @@ -42,61 +36,59 @@ * to the last ftrace_ret_stack saved. All references to the * ftrace_ret_stack has the format of: * - * bits: 0 - 13 Index in words from the previous ftrace_ret_stack - * bits: 14 - 15 Type of storage + * bits: 0 - 9 offset in words from the previous ftrace_ret_stack + * (bitmap type should have FGRAPH_RET_INDEX always) + * bits: 10 - 11 Type of storage * 0 - reserved - * 1 - fgraph_array index - * For fgraph_array_index: - * bits: 16 - 23 The fgraph_ops fgraph_array index + * 1 - bitmap of fgraph_array index + * + * For bitmap of fgraph_array index + * bits: 12 - 27 The bitmap of fgraph_ops fgraph_array index * * That is, at the end of function_graph_enter, if the first and forth * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called * on the return of the function being traced, this is what will be on the * task's shadow ret_stack: (the stack grows upward) * - * | | <- task->curr_ret_stack - * +----------------------------------+ - * | (3 << FGRAPH_ARRAY_SHIFT)|(2) | ( 3 for index of fourth fgraph_ops) - * +----------------------------------+ - * | (0 << FGRAPH_ARRAY_SHIFT)|(1) | ( 0 for index of first fgraph_ops) - * +----------------------------------+ - * | struct ftrace_ret_stack | - * | (stores the saved ret pointer) | - * +----------------------------------+ - * | (X) | (N) | ( N words away from previous ret_stack) - * | | + * | | <- task->curr_ret_stack + * +--------------------------------------------+ + * | bitmap_type(bitmap:(BIT(3)|BIT(0)), | + * | offset:FGRAPH_RET_INDEX) | <- the offset is from here + * +--------------------------------------------+ + * | struct ftrace_ret_stack | + * | (stores the saved ret pointer) | <- the offset points here + * +--------------------------------------------+ + * | (X) | (N) | ( N words away from + * | | previous ret_stack) * * If a backtrace is required, and the real return pointer needs to be * fetched, then it looks at the task's curr_ret_stack index, if it - * is greater than zero, it would subtact one, and then mask the value - * on the ret_stack by FGRAPH_RET_INDEX_MASK and subtract FGRAPH_RET_INDEX - * from that, to get the index of the ftrace_ret_stack structure stored - * on the shadow stack. 
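/*
 * Worked example of the encoding described above (illustrative annotation,
 * not part of the patch): if fgraph_ops 0 and 3 both trace a given function
 * entry, the word that set_fgraph_index_bitmap() stores on top of the saved
 * ftrace_ret_stack is
 *
 *	  ((BIT(3) | BIT(0)) << FGRAPH_INDEX_SHIFT)	bitmap -> bits 12-27
 *	| (FGRAPH_TYPE_BITMAP << FGRAPH_TYPE_SHIFT)	type   -> bits 10-11
 *	|  FGRAPH_RET_INDEX				offset -> bits  0-9
 *
 * so the return side can both locate the ftrace_ret_stack (offset) and
 * know exactly which retfuncs to invoke (bitmap).
 */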
+ * is greater than zero (reserved, or right before poped), it would mask + * the value by FGRAPH_RET_INDEX_MASK to get the offset index of the + * ftrace_ret_stack structure stored on the shadow stack. */ -#define FGRAPH_RET_INDEX_SIZE 14 -#define FGRAPH_RET_INDEX_MASK ((1 << FGRAPH_RET_INDEX_SIZE) - 1) - +#define FGRAPH_RET_INDEX_SIZE 10 +#define FGRAPH_RET_INDEX_MASK GENMASK(FGRAPH_RET_INDEX_SIZE - 1, 0) #define FGRAPH_TYPE_SIZE 2 -#define FGRAPH_TYPE_MASK ((1 << FGRAPH_TYPE_SIZE) - 1) +#define FGRAPH_TYPE_MASK GENMASK(FGRAPH_TYPE_SIZE - 1, 0) #define FGRAPH_TYPE_SHIFT FGRAPH_RET_INDEX_SIZE enum { FGRAPH_TYPE_RESERVED = 0, - FGRAPH_TYPE_ARRAY = 1, + FGRAPH_TYPE_BITMAP = 1, }; -#define FGRAPH_ARRAY_SIZE 16 -#define FGRAPH_ARRAY_MASK ((1 << FGRAPH_ARRAY_SIZE) - 1) -#define FGRAPH_ARRAY_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) +#define FGRAPH_INDEX_SIZE 16 +#define FGRAPH_INDEX_MASK GENMASK(FGRAPH_INDEX_SIZE - 1, 0) +#define FGRAPH_INDEX_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) /* Currently the max stack index can't be more than register callers */ -#define FGRAPH_MAX_INDEX FGRAPH_ARRAY_SIZE +#define FGRAPH_MAX_INDEX (FGRAPH_INDEX_SIZE + FGRAPH_RET_INDEX) + +#define FGRAPH_ARRAY_SIZE FGRAPH_INDEX_SIZE -#define FGRAPH_FRAME_SIZE (FGRAPH_RET_SIZE + FGRAPH_ARRAY_SIZE * (sizeof(long))) -#define FGRAPH_FRAME_INDEX (ALIGN(FGRAPH_FRAME_SIZE, \ - sizeof(long)) / sizeof(long)) #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ @@ -113,19 +105,36 @@ static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; static inline int get_ret_stack_index(struct task_struct *t, int offset) { - return current->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; + return t->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; } static inline int get_fgraph_type(struct task_struct *t, int offset) { - return (current->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & - FGRAPH_TYPE_MASK; + return (t->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; +} + +static inline unsigned long +get_fgraph_index_bitmap(struct task_struct *t, int offset) +{ + return (t->ret_stack[offset] >> FGRAPH_INDEX_SHIFT) & FGRAPH_INDEX_MASK; } -static inline int get_fgraph_array(struct task_struct *t, int offset) +static inline void +set_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) { - return (current->ret_stack[offset] >> FGRAPH_ARRAY_SHIFT) & - FGRAPH_ARRAY_MASK; + t->ret_stack[offset] = (bitmap << FGRAPH_INDEX_SHIFT) | + (FGRAPH_TYPE_BITMAP << FGRAPH_TYPE_SHIFT) | FGRAPH_RET_INDEX; +} + +static inline bool is_fgraph_index_set(struct task_struct *t, int offset, int idx) +{ + return !!(get_fgraph_index_bitmap(t, offset) & BIT(idx)); +} + +static inline void +add_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) +{ + t->ret_stack[offset] |= (bitmap << FGRAPH_INDEX_SHIFT); } /* ftrace_graph_entry set to this to tell some archs to run function graph */ @@ -160,17 +169,14 @@ get_ret_stack(struct task_struct *t, int offset, int *index) BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long)); - if (offset <= 0) + if (unlikely(offset <= 0)) return NULL; - idx = get_ret_stack_index(t, offset - 1); - - if (idx <= 0 || idx > FGRAPH_MAX_INDEX) + idx = get_ret_stack_index(t, --offset); + if (WARN_ON_ONCE(idx <= 0 || idx > offset)) return NULL; - offset -= idx + FGRAPH_RET_INDEX; - if (offset < 0) - return NULL; + offset -= idx; *index = offset; return RET_STACK(t, offset); @@ -231,10 +237,12 @@ void ftrace_graph_stop(void) /* 
Add a function return address to the trace stack on thread info.*/ static int ftrace_push_return_trace(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp) + unsigned long frame_pointer, unsigned long *retp, + int fgraph_idx) { struct ftrace_ret_stack *ret_stack; unsigned long long calltime; + unsigned long val; int index; if (unlikely(ftrace_graph_is_dead())) @@ -243,6 +251,21 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, if (!current->ret_stack) return -EBUSY; + /* + * At first, check whether the previous fgraph callback is pushed by + * the fgraph on the same function entry. + * But if @func is the self tail-call function, we also need to ensure + * the ret_stack is not for the previous call by checking whether the + * bit of @fgraph_idx is set or not. + */ + ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); + if (ret_stack && ret_stack->func == func && + get_fgraph_type(current, index + FGRAPH_RET_INDEX) == FGRAPH_TYPE_BITMAP && + !is_fgraph_index_set(current, index + FGRAPH_RET_INDEX, fgraph_idx)) + return index + FGRAPH_RET_INDEX; + + val = (FGRAPH_TYPE_RESERVED << FGRAPH_TYPE_SHIFT) | FGRAPH_RET_INDEX; + BUILD_BUG_ON(SHADOW_STACK_SIZE % sizeof(long)); /* @@ -252,17 +275,19 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, smp_rmb(); /* The return trace stack is full */ - if (current->curr_ret_stack >= SHADOW_STACK_MAX_INDEX) { + if (current->curr_ret_stack + FGRAPH_RET_INDEX >= SHADOW_STACK_MAX_INDEX) { atomic_inc(¤t->trace_overrun); return -EBUSY; } calltime = trace_clock_local(); - index = current->curr_ret_stack; - /* ret offset = 1 ; type = reserved */ - current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + index = READ_ONCE(current->curr_ret_stack); ret_stack = RET_STACK(current, index); + index += FGRAPH_RET_INDEX; + + /* ret offset = FGRAPH_RET_INDEX ; type = reserved */ + current->ret_stack[index] = val; ret_stack->ret = ret; /* * The unwinders expect curr_ret_stack to point to either zero @@ -278,7 +303,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, * at least a correct index! */ barrier(); - current->curr_ret_stack += FGRAPH_RET_INDEX + 1; + current->curr_ret_stack = index + 1; /* * This next barrier is to ensure that an interrupt coming in * will not corrupt what we are about to write. @@ -286,7 +311,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, barrier(); /* Still keep it reserved even if an interrupt came in */ - current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + current->ret_stack[index] = val; ret_stack->ret = ret; ret_stack->func = func; @@ -297,7 +322,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR ret_stack->retp = retp; #endif - return 0; + return index; } /* @@ -314,15 +339,13 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, # define MCOUNT_INSN_SIZE 0 #endif +/* If the caller does not use ftrace, call this function. 
*/ int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp) { struct ftrace_graph_ent trace; - int offset; - int start; - int type; - int val; - int cnt = 0; + unsigned long bitmap = 0; + int index; int i; #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS @@ -337,69 +360,33 @@ int function_graph_enter(unsigned long ret, unsigned long func, return -EBUSY; #endif - if (!ftrace_ops_test(&global_ops, func, NULL)) - return -EBUSY; - trace.func = func; trace.depth = ++current->curr_ret_depth; - if (ftrace_push_return_trace(ret, func, frame_pointer, retp)) + index = ftrace_push_return_trace(ret, func, frame_pointer, retp, 0); + if (index < 0) goto out; - /* Use start for the distance to ret_stack (skipping over reserve) */ - start = offset = current->curr_ret_stack - 2; - for (i = 0; i < fgraph_array_cnt; i++) { struct fgraph_ops *gops = fgraph_array[i]; if (gops == &fgraph_stub) continue; - if ((offset == start) && - (current->curr_ret_stack >= SHADOW_STACK_INDEX - 1)) { - atomic_inc(¤t->trace_overrun); - break; - } - if (fgraph_array[i]->entryfunc(&trace, fgraph_array[i])) { - offset = current->curr_ret_stack; - /* Check the top level stored word */ - type = get_fgraph_type(current, offset - 1); - - val = (i << FGRAPH_ARRAY_SHIFT) | - (FGRAPH_TYPE_ARRAY << FGRAPH_TYPE_SHIFT) | - ((offset - start) - 1); - - /* We can reuse the top word if it is reserved */ - if (type == FGRAPH_TYPE_RESERVED) { - current->ret_stack[offset - 1] = val; - cnt++; - continue; - } - val++; - - current->ret_stack[offset] = val; - /* - * Write the value before we increment, so that - * if an interrupt comes in after we increment - * it will still see the value and skip over - * this. - */ - barrier(); - current->curr_ret_stack++; - /* - * Have to write again, in case an interrupt - * came in before the increment and after we - * wrote the value. - */ - barrier(); - current->ret_stack[offset] = val; - cnt++; - } + if (ftrace_ops_test(&gops->ops, func, NULL) && + gops->entryfunc(&trace, gops)) + bitmap |= BIT(i); } - if (!cnt) + if (!bitmap) goto out_ret; + /* + * Since this function uses fgraph_idx = 0 as a tail-call checking + * flag, set that bit always. + */ + set_fgraph_index_bitmap(current, index, bitmap | BIT(0)); + return 0; out_ret: current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; @@ -408,15 +395,54 @@ int function_graph_enter(unsigned long ret, unsigned long func, return -EBUSY; } +/* This is called from ftrace_graph_func() via ftrace */ +int function_graph_enter_ops(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct fgraph_ops *gops) +{ + struct ftrace_graph_ent trace; + int index; + int type; + + /* Check whether the fgraph_ops is unregistered. 
*/ + if (unlikely(fgraph_array[gops->idx] == &fgraph_stub)) + return -ENODEV; + + /* Use start for the distance to ret_stack (skipping over reserve) */ + index = ftrace_push_return_trace(ret, func, frame_pointer, retp, gops->idx); + if (index < 0) + return index; + type = get_fgraph_type(current, index); + + /* This is the first ret_stack for this fentry */ + if (type == FGRAPH_TYPE_RESERVED) + ++current->curr_ret_depth; + + trace.func = func; + trace.depth = current->curr_ret_depth; + if (gops->entryfunc(&trace, gops)) { + if (type == FGRAPH_TYPE_RESERVED) + set_fgraph_index_bitmap(current, index, BIT(gops->idx)); + else + add_fgraph_index_bitmap(current, index, BIT(gops->idx)); + return 0; + } + + if (type == FGRAPH_TYPE_RESERVED) { + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->curr_ret_depth--; + } + return -EBUSY; +} + /* Retrieve a function return address to the trace stack on thread info.*/ static struct ftrace_ret_stack * ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, - unsigned long frame_pointer) + unsigned long frame_pointer, int *index) { struct ftrace_ret_stack *ret_stack; - int index; - ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); + ret_stack = get_ret_stack(current, current->curr_ret_stack, index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -455,6 +481,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, } #endif + *index += FGRAPH_RET_INDEX; *ret = ret_stack->ret; trace->func = ret_stack->func; trace->calltime = ret_stack->calltime; @@ -507,13 +534,12 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs { struct ftrace_ret_stack *ret_stack; struct ftrace_graph_ret trace; + unsigned long bitmap; unsigned long ret; - int offset; int index; - int idx; int i; - ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer); + ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -527,16 +553,17 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif - offset = current->curr_ret_stack - 1; - index = get_ret_stack_index(current, offset); + bitmap = get_fgraph_index_bitmap(current, index); + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { + struct fgraph_ops *gops = fgraph_array[i]; - /* index has to be at least one! Optimize for it */ - i = 0; - do { - idx = get_fgraph_array(current, offset - i); - fgraph_array[idx]->retfunc(&trace, fgraph_array[idx]); - i++; - } while (i < index); + if (!(bitmap & BIT(i))) + continue; + if (gops == &fgraph_stub) + continue; + + gops->retfunc(&trace, gops); + } /* * The ftrace_graph_return() may still access the current @@ -544,7 +571,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs * curr_ret_stack is after that. */ barrier(); - current->curr_ret_stack -= index + FGRAPH_RET_INDEX; + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; current->curr_ret_depth--; return ret; } @@ -622,7 +649,17 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, ret_stack = get_ret_stack(current, i, &i); if (!ret_stack) break; - if (ret_stack->retp == retp) + /* + * For the tail-call, there would be 2 or more ftrace_ret_stacks on + * the ret_stack, which records "return_to_handler" as the return + * address excpt for the last one. 
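/*
 * Illustrative walk-through (not part of the patch): assume fgraph_ops 0
 * and 3 are registered and both entryfuncs accept a call to foo().
 * function_graph_enter() pushes one ftrace_ret_stack for foo() and stores
 * the bitmap BIT(0) | BIT(3) in the word above it.  When foo() returns,
 * __ftrace_return_to_handler() reads that bitmap and invokes only
 * fgraph_array[0]->retfunc() and fgraph_array[3]->retfunc(), then pops
 * the whole frame by FGRAPH_RET_INDEX + 1 words in one go.
 */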
+ * But on the real stack, there should be 1 entry because tail-call + * reuses the return address on the stack and jump to the next function. + * Thus we will continue to find real return address. + */ + if (ret_stack->retp == retp && + ret_stack->ret != + (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret_stack->ret; } @@ -645,6 +682,9 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, i = *idx; do { ret_stack = get_ret_stack(task, task_idx, &task_idx); + if (ret_stack && ret_stack->ret == + (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) + continue; i--; } while (i >= 0 && ret_stack); @@ -655,17 +695,25 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, } #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */ -static struct ftrace_ops graph_ops = { - .func = ftrace_graph_func, - .flags = FTRACE_OPS_FL_INITIALIZED | - FTRACE_OPS_FL_PID | - FTRACE_OPS_GRAPH_STUB, +void fgraph_init_ops(struct ftrace_ops *dst_ops, + struct ftrace_ops *src_ops) +{ + dst_ops->func = ftrace_graph_func; + dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB; + #ifdef FTRACE_GRAPH_TRAMP_ADDR - .trampoline = FTRACE_GRAPH_TRAMP_ADDR, + dst_ops->trampoline = FTRACE_GRAPH_TRAMP_ADDR; /* trampoline_size is only needed for dynamically allocated tramps */ #endif - ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash) -}; + +#ifdef CONFIG_DYNAMIC_FTRACE + if (src_ops) { + dst_ops->func_hash = &src_ops->local_hash; + mutex_init(&dst_ops->local_hash.regex_lock); + dst_ops->flags |= FTRACE_OPS_FL_INITIALIZED; + } +#endif +} void ftrace_graph_sleep_time_control(bool enable) { @@ -869,11 +917,20 @@ static int start_graph_tracing(void) int register_ftrace_graph(struct fgraph_ops *gops) { + int command = 0; int ret = 0; int i; mutex_lock(&ftrace_lock); + if (!gops->ops.func) { + gops->ops.flags |= FTRACE_OPS_GRAPH_STUB; + gops->ops.func = ftrace_graph_func; +#ifdef FTRACE_GRAPH_TRAMP_ADDR + gops->ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR; +#endif + } + if (!fgraph_array[0]) { /* The array must always have real data on it */ for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) @@ -893,6 +950,7 @@ int register_ftrace_graph(struct fgraph_ops *gops) fgraph_array[i] = gops; if (i + 1 > fgraph_array_cnt) fgraph_array_cnt = i + 1; + gops->idx = i; ftrace_graph_active++; @@ -909,9 +967,10 @@ int register_ftrace_graph(struct fgraph_ops *gops) */ ftrace_graph_return = return_run; ftrace_graph_entry = entry_run; - - ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET); + command = FTRACE_START_FUNC_RET; } + + ret = ftrace_startup(&gops->ops, command); out: mutex_unlock(&ftrace_lock); return ret; @@ -919,6 +978,7 @@ int register_ftrace_graph(struct fgraph_ops *gops) void unregister_ftrace_graph(struct fgraph_ops *gops) { + int command = 0; int i; mutex_lock(&ftrace_lock); @@ -926,25 +986,29 @@ void unregister_ftrace_graph(struct fgraph_ops *gops) if (unlikely(!ftrace_graph_active)) goto out; - for (i = 0; i < fgraph_array_cnt; i++) - if (gops == fgraph_array[i]) - break; - if (i >= fgraph_array_cnt) + if (unlikely(gops->idx < 0 || gops->idx >= fgraph_array_cnt)) goto out; - fgraph_array[i] = &fgraph_stub; - if (i + 1 == fgraph_array_cnt) { - for (; i >= 0; i--) - if (fgraph_array[i] != &fgraph_stub) - break; + WARN_ON_ONCE(fgraph_array[gops->idx] != gops); + + fgraph_array[gops->idx] = &fgraph_stub; + if (gops->idx + 1 == fgraph_array_cnt) { + i = gops->idx; + while (i >= 0 && fgraph_array[i] == &fgraph_stub) + i--; fgraph_array_cnt = i + 1; } 
ftrace_graph_active--; + + if (!ftrace_graph_active) + command = FTRACE_STOP_FUNC_RET; + + ftrace_shutdown(&gops->ops, command); + if (!ftrace_graph_active) { ftrace_graph_return = ftrace_stub_graph; ftrace_graph_entry = ftrace_graph_entry_stub; - ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); unregister_pm_notifier(&ftrace_suspend_notifier); unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); } diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index a720dd7cf290..bff6c04d5201 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -3016,6 +3016,8 @@ int ftrace_startup(struct ftrace_ops *ops, int command) if (unlikely(ftrace_disabled)) return -ENODEV; + ftrace_ops_init(ops); + ret = __register_ftrace_function(ops); if (ret) return ret; @@ -7323,7 +7325,7 @@ __init void ftrace_init_global_array_ops(struct trace_array *tr) tr->ops = &global_ops; tr->ops->private = tr; ftrace_init_trace_array(tr); - init_array_fgraph_ops(tr); + init_array_fgraph_ops(tr, tr->ops); } void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func) @@ -8055,7 +8057,7 @@ static int register_ftrace_function_nolock(struct ftrace_ops *ops) */ int register_ftrace_function(struct ftrace_ops *ops) { - int ret; + int ret = -1; lock_direct_mutex(); ret = prepare_direct_functions_for_ipmodify(ops); diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index b11e4cf4f72e..3176f8dcaf94 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -891,8 +891,8 @@ extern int __trace_graph_entry(struct trace_array *tr, extern void __trace_graph_return(struct trace_array *tr, struct ftrace_graph_ret *trace, unsigned int trace_ctx); -extern void init_array_fgraph_ops(struct trace_array *tr); -extern int allocate_fgraph_ops(struct trace_array *tr); +extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); +extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); extern void free_fgraph_ops(struct trace_array *tr); #ifdef CONFIG_DYNAMIC_FTRACE @@ -975,6 +975,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) preempt_enable_notrace(); return ret; } + #else static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) { @@ -1000,18 +1001,19 @@ static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) (fgraph_max_depth && trace->depth >= fgraph_max_depth); } +void fgraph_init_ops(struct ftrace_ops *dst_ops, + struct ftrace_ops *src_ops); + #else /* CONFIG_FUNCTION_GRAPH_TRACER */ static inline enum print_line_t print_graph_function_flags(struct trace_iterator *iter, u32 flags) { return TRACE_TYPE_UNHANDLED; } -static inline void init_array_fgraph_ops(struct trace_array *tr) { } -static inline int allocate_fgraph_ops(struct trace_array *tr) -{ - return 0; -} static inline void free_fgraph_ops(struct trace_array *tr) { } +/* ftrace_ops may not be defined */ +#define init_array_fgraph_ops(tr, ops) do { } while (0) +#define allocate_fgraph_ops(tr, ops) ({ 0; }) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ extern struct list_head ftrace_pids; diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c index 8e8da0d0ee52..13bf2415245d 100644 --- a/kernel/trace/trace_functions.c +++ b/kernel/trace/trace_functions.c @@ -91,7 +91,7 @@ int ftrace_create_function_files(struct trace_array *tr, if (!tr->ops) return -EINVAL; - ret = allocate_fgraph_ops(tr); + ret = allocate_fgraph_ops(tr, tr->ops); if (ret) { kfree(tr->ops); return ret; diff --git a/kernel/trace/trace_functions_graph.c 
b/kernel/trace/trace_functions_graph.c index 9ccc904a7703..7f30652f0e97 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -288,7 +288,7 @@ static struct fgraph_ops funcgraph_ops = { .retfunc = &trace_graph_return, }; -int allocate_fgraph_ops(struct trace_array *tr) +int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops) { struct fgraph_ops *gops; @@ -301,6 +301,9 @@ int allocate_fgraph_ops(struct trace_array *tr) tr->gops = gops; gops->private = tr; + + fgraph_init_ops(&gops->ops, ops); + return 0; } @@ -309,10 +312,11 @@ void free_fgraph_ops(struct trace_array *tr) kfree(tr->gops); } -__init void init_array_fgraph_ops(struct trace_array *tr) +__init void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops) { tr->gops = &funcgraph_ops; funcgraph_ops.private = tr; + fgraph_init_ops(&tr->gops->ops, ops); } static int graph_trace_init(struct trace_array *tr) From patchwork Fri Jan 12 10:13:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187697 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp80530dyc; Fri, 12 Jan 2024 02:25:38 -0800 (PST) X-Google-Smtp-Source: AGHT+IEkgLtra2VE7P0Ocm96uQIsyVAoEKCktCHBURCguj8LoR9v2/S2uSYIip4mG6OwvXHUif1c X-Received: by 2002:a17:902:9890:b0:1d4:322e:d692 with SMTP id s16-20020a170902989000b001d4322ed692mr514206plp.73.1705055137753; Fri, 12 Jan 2024 02:25:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705055137; cv=none; d=google.com; s=arc-20160816; b=dA6/35aTjO1KFCYGfbWvhKhNVfnROwZX2lTau+4KWkAydbV5Y4AYkcNTldqgHx72mL fkAe/YoDv1JbrZX2LmfXaRcztZwX7GJom91+3wlCNr18mLmMd2XxUMLhDiM4/8Xe5QlH qYjcRamPo3Y7LzZ9D/OepW8z8YN8NaA609ICqE1JvqjtEaIHEGpKodn3jk/nGfiv1YYG gZGu4PA6JRqUugGi4YRHYv+xs4txXuxCsIuNi1XNPifRzTCwM0vi2V4FFJUa3W6Xzx8k sQBNFFtRc7g/SAWEwM7GrgotAAVu1GcPncN8VbAPXRu3sQc00j7eOyqxHvAosbuzZ1Ij Q2wQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=kpI+GfftcFIOgeCmy78JVAwY/+5ZIlL73Ct8+6CJp00=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=r8MVSH8WvLZ5CAT8/0hpJUmsIIYdUxbnljVh1fhDeu4AQpIuJAmDGyOMLlu3kbBuhN SrJJEeYPtS928tQrrzVp9pk0Sf0//Bh2hFI/FoIPAzzyy7C0SbUUGw0cSmKmUNxSvoag MNBALBlGxTJWEkda41lohf8i3ex1K8WLDu0+eTSkC0fVvmbomddy1zYyVWs1tkH1Vue9 UrLFhO1p5kpEnSLqwZ8flKRgmbeRNLm6ku0qSHeiLjNWwOKluM0J3yCeaATGOxLAfYe8 BEtILJ6DXFlo1SFvjMlvOgQntiRP7wVRZa+6BL1ZumeqlAYcz3iLkBOZeKkPpfIO+qNX WQtg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=NFJJhyLl; spf=pass (google.com: domain of linux-kernel+bounces-24561-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24561-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
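Because the patch above embeds a struct ftrace_ops in each fgraph_ops, every function_graph user is now filtered through its own ops (instances inherit their hash from the instance's ftrace_ops via fgraph_init_ops()). For a direct user of the API something along these lines should become possible; the exact call sequence below is my own sketch, not taken from the series:

	static struct fgraph_ops my_gops = {
		.entryfunc	= my_graph_entry,
		.retfunc	= my_graph_return,
	};

	/* hypothetical: limit this one graph tracer to kmem_cache_* */
	ftrace_set_filter(&my_gops.ops, (unsigned char *)"kmem_cache_*",
			  strlen("kmem_cache_*"), 1);
	register_ftrace_graph(&my_gops);

Previously all fgraph users went through the single global graph_ops and therefore shared one filter.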
[139.178.88.99]) by mx.google.com with ESMTPS id jb4-20020a170903258400b001d4a25267b8si2950863plb.237.2024.01.12.02.25.37 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:25:37 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24561-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) client-ip=139.178.88.99; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=NFJJhyLl; spf=pass (google.com: domain of linux-kernel+bounces-24561-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24561-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sv.mirrors.kernel.org (Postfix) with ESMTPS id 5F516289FF6 for ; Fri, 12 Jan 2024 10:16:17 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 04DAC5EE65; Fri, 12 Jan 2024 10:13:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="NFJJhyLl" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 633B35EE63; Fri, 12 Jan 2024 10:13:43 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 78D80C433C7; Fri, 12 Jan 2024 10:13:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054423; bh=FIMtdTcBTgHmiwOg+KoDtODrD2ZwnJXD9R2kw/o70PY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NFJJhyLlEbCmFL5iiBzoiN08zvPigNZIxmTS3i9FdiP0m7QWbkGI5TRR7qejqQ88d 5+IPA+JeW39f6ZSd6x3vNAkJuVSVWTqYW7j/MkyQ0iDNINBJnNzzHemaSXVCvIoN9o Cl9Kbe6Q0xQqObf7XKBruFTLmHpARd+n1sIlnEya0zzVA1jU1GdxZk91LRsAixqfiD GULvJB6Q/OW+NVmGny3DLRMvVZmFtTgiGavA4Vb/ueQF2pPS4Ht+F5oj7vgMyVUHJs sdyi+oHLyRa1z3Gli6zyZhruVfGsR6qs5lan4grZAWln9CepNl/KVHDHLdyTnthf2P /7EOvpCa4gDhg== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 14/36] function_graph: Use a simple LRU for fgraph_array index number Date: Fri, 12 Jan 2024 19:13:37 +0900 Message-Id: <170505441697.459169.6267988498295951630.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879896120847109 X-GMAIL-MSGID: 1787879896120847109 From: Masami Hiramatsu (Google) Since the fgraph_array index is used for the bitmap on the shadow stack, it may leave some entries after a function_graph instance is removed. 
Thus if another instance reuses the fgraph_array index soon after releasing it, the fgraph may confuse to call the newer callback for the entries which are pushed by the older instance. To avoid reusing the fgraph_array index soon after releasing, introduce a simple LRU table for managing the index number. This will reduce the possibility of this confusion. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v5: - Fix the underflow bug in fgraph_lru_release_index() and return 0 if the release is succeded. Changes in v4: - Newly added. --- kernel/trace/fgraph.c | 67 ++++++++++++++++++++++++++++++++++--------------- 1 file changed, 47 insertions(+), 20 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 5724062846f7..ad4ea196b76e 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -99,10 +99,44 @@ enum { DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; -static int fgraph_array_cnt; - static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; +/* LRU index table for fgraph_array */ +static int fgraph_lru_table[FGRAPH_ARRAY_SIZE]; +static int fgraph_lru_next; +static int fgraph_lru_last; + +static void fgraph_lru_init(void) +{ + int i; + + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) + fgraph_lru_table[i] = i; +} + +static int fgraph_lru_release_index(int idx) +{ + if (idx < 0 || idx >= FGRAPH_ARRAY_SIZE || + fgraph_lru_table[fgraph_lru_last] != -1) + return -1; + + fgraph_lru_table[fgraph_lru_last] = idx; + fgraph_lru_last = (fgraph_lru_last + 1) % FGRAPH_ARRAY_SIZE; + return 0; +} + +static int fgraph_lru_alloc_index(void) +{ + int idx = fgraph_lru_table[fgraph_lru_next]; + + if (idx == -1) + return -1; + + fgraph_lru_table[fgraph_lru_next] = -1; + fgraph_lru_next = (fgraph_lru_next + 1) % FGRAPH_ARRAY_SIZE; + return idx; +} + static inline int get_ret_stack_index(struct task_struct *t, int offset) { return t->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; @@ -367,7 +401,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, if (index < 0) goto out; - for (i = 0; i < fgraph_array_cnt; i++) { + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { struct fgraph_ops *gops = fgraph_array[i]; if (gops == &fgraph_stub) @@ -935,21 +969,17 @@ int register_ftrace_graph(struct fgraph_ops *gops) /* The array must always have real data on it */ for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) fgraph_array[i] = &fgraph_stub; + fgraph_lru_init(); } - /* Look for an available spot */ - for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { - if (fgraph_array[i] == &fgraph_stub) - break; - } - if (i >= FGRAPH_ARRAY_SIZE) { + i = fgraph_lru_alloc_index(); + if (i < 0 || + WARN_ON_ONCE(fgraph_array[i] != &fgraph_stub)) { ret = -EBUSY; goto out; } fgraph_array[i] = gops; - if (i + 1 > fgraph_array_cnt) - fgraph_array_cnt = i + 1; gops->idx = i; ftrace_graph_active++; @@ -979,25 +1009,22 @@ int register_ftrace_graph(struct fgraph_ops *gops) void unregister_ftrace_graph(struct fgraph_ops *gops) { int command = 0; - int i; mutex_lock(&ftrace_lock); if (unlikely(!ftrace_graph_active)) goto out; - if (unlikely(gops->idx < 0 || gops->idx >= fgraph_array_cnt)) + if (unlikely(gops->idx < 0 || gops->idx >= FGRAPH_ARRAY_SIZE)) + goto out; + + if (WARN_ON_ONCE(fgraph_array[gops->idx] != gops)) goto out; - WARN_ON_ONCE(fgraph_array[gops->idx] != gops); + if (fgraph_lru_release_index(gops->idx) < 0) + goto out; fgraph_array[gops->idx] = &fgraph_stub; - if (gops->idx + 1 == fgraph_array_cnt) { - i = gops->idx; - while (i >= 0 && fgraph_array[i] == &fgraph_stub) - i--; - fgraph_array_cnt = 
i + 1; - } ftrace_graph_active--; From patchwork Fri Jan 12 10:13:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187675 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77054dyc; Fri, 12 Jan 2024 02:16:35 -0800 (PST) X-Google-Smtp-Source: AGHT+IGx1YUuSyfzHXIQVDxv3C0TSMltWloXgyWGsd1sMppJAOLC/HrKK8NTEg6zlEBYA7qKuRke X-Received: by 2002:a05:620a:359:b0:781:dfd0:d497 with SMTP id t25-20020a05620a035900b00781dfd0d497mr1262681qkm.128.1705054594867; Fri, 12 Jan 2024 02:16:34 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054594; cv=none; d=google.com; s=arc-20160816; b=DFA4wGzMNdakcuamkH/KsZFSXJflDBiYgqPLCGM60l1qMIl1F+pflpb4VZVop4ZSEf vJT5uBBlRev4dvCCpeSzbJAkVFBbfd4hFhH5xIGrIxTiVb+OD75Ky595oCYUWGlszF2A 1dW7SA8wh5UuuXUcICO7OP0BIBQFbMHnwoRYkU+IbNttoqDTHLZIXxvUzlh1WO7wnWt3 BGNlCDchr38cZ82lP2Cg8DAnKK92HmoOfyu7vG/qUTE/jFFeWj0gs01hWjtCHXMKdf9S A7rZ2AXuLggEwaVQH+iEDb336sfC969YjQIv3F/BbAv/vcfBR9ts929YSCJahZ33bovE GLdw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=MPh+m7he4s7AOIrwQ5dUjcsL294EHtHtJiIcuGFs1mA=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=EffJ19l3OU/Q6RMkZZBuSZyZAOLsRMFCa1mvI8m+lSJdyUQa1GInyNuvv80gTdZWJI dDGgdfF1c1dvhwqyLUN1+6X2gDreciLABSrb+t4VT2mQxoXR+PEyKWWnlW1GuK1RdAcg rWcJAxt9cRnVEw4ZCkcXrMy4msrIe8htHHeLehwP5CYWTDuvbnd6s9LjDOgjXmRoRrKC oP33wuaROnv5U0Gi8qW+oqhxmYgot0V/OwjzdclOiYsUY6sjUwMf2zJ4y/cA0Vpl3sOP a+WLfyt305ivQpuqGcxoXH5LREQuWPyRpW++4x/7PKd3a5yGhl1M5sAMOD+7O3q50VfO ES6Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=OeQVfn0S; spf=pass (google.com: domain of linux-kernel+bounces-24562-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24562-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) by mx.google.com with ESMTPS id os29-20020a05620a811d00b00783264580b0si2572221qkn.57.2024.01.12.02.16.34 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:16:34 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24562-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=OeQVfn0S; spf=pass (google.com: domain of linux-kernel+bounces-24562-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24562-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id A62FF1C25280 for ; Fri, 12 Jan 2024 10:16:34 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 01D6A60B84; Fri, 12 Jan 2024 10:13:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="OeQVfn0S" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 411A05EE66; Fri, 12 Jan 2024 10:13:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0FE9DC433F1; Fri, 12 Jan 2024 10:13:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054434; bh=dvBZGtr7WXccTAPNoVsk5fYBLF5EMCyUAEgpkrbGiSE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OeQVfn0SCqEU66Nn0PJlNjmJbHypif6ZjMaN423CSWh4sfmZAwtAa/2GxwEsADLSB HXOAmEl4qHte2PCDF1+EqAHTC2bDkQVII4FmKn4YFqrrezDUcWgEo21P/TXEjbJ9LE pXjy85LOYtyDGHssE+1vuO73pcVdxasvzt3D6J2WqgLs641eUfT4L4f3FHHUj9lZKi YeX5yoPqoua5bnebyS5AmXxCZjXObKzfHhdpGAwQgvqWcfSLo07fJn3YhfDuQolk6m icCOHMRcQv45qIaF1Qp6FWPFFvCNItb87ZBYZt2f/X9WgcAuLUCyUcFBt4RLG7ksDB tgmVM953l31uw== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 15/36] function_graph: Add "task variables" per task for fgraph_ops Date: Fri, 12 Jan 2024 19:13:48 +0900 Message-Id: <170505442865.459169.16989493613988810562.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879327040424682 X-GMAIL-MSGID: 1787879327040424682 From: Steven Rostedt (VMware) Add a "task variables" array on the tasks shadow ret_stack that is the size of longs for each possible registered fgraph_ops. That's a total of 16, taking up 8 * 16 = 128 bytes (out of a page size 4k). 
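For a sense of how this reserved word is meant to be consumed, here is a minimal, hypothetical sketch of an fgraph_ops user keeping a per-task nesting count in it through fgraph_get_task_var(), which this patch exports; the callback names and the counting logic are illustrative assumptions, not part of the series.

/* Hypothetical tracer counting per-task nesting with the new task variable. */
#include <linux/ftrace.h>

static int my_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops)
{
	unsigned long *count = fgraph_get_task_var(gops);

	(*count)++;			/* one long per task, per fgraph_ops */
	return 1;			/* ask for the matching retfunc call */
}

static void my_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops)
{
	unsigned long *count = fgraph_get_task_var(gops);

	if (*count)
		(*count)--;
}

static struct fgraph_ops my_gops = {
	.entryfunc	= my_entry,
	.retfunc	= my_return,
};
/* register_ftrace_graph(&my_gops) would hook it in. */

Each registered fgraph_ops owns exactly one long per task, indexed by gops->idx, so two tracers never touch each other's state.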
This will allow for fgraph_ops to do specific features on a per task basis having a way to maintain state for each task. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Move fgraph_ops::idx to previous patch in the series. Changes in v2: - Make description lines shorter than 76 chars. --- include/linux/ftrace.h | 1 + kernel/trace/fgraph.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 70 insertions(+), 1 deletion(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 3d9e74ea6065..737f84104577 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1116,6 +1116,7 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx); unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp); +unsigned long *fgraph_get_task_var(struct fgraph_ops *gops); /* * Sometimes we don't want to trace a function with the function diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index ad4ea196b76e..4ff5d2864fd2 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -92,10 +92,18 @@ enum { #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ -#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1)) +#define SHADOW_STACK_MAX_INDEX \ + (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1 + FGRAPH_ARRAY_SIZE)) #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) +/* + * Each fgraph_ops has a reservered unsigned long at the end (top) of the + * ret_stack to store task specific state. + */ +#define SHADOW_STACK_TASK_VARS(ret_stack) \ + ((unsigned long *)(&(ret_stack)[SHADOW_STACK_INDEX - FGRAPH_ARRAY_SIZE])) + DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; @@ -182,6 +190,44 @@ static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops) { } +static void ret_stack_set_task_var(struct task_struct *t, int idx, long val) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack); + + gvals[idx] = val; +} + +static unsigned long * +ret_stack_get_task_var(struct task_struct *t, int idx) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack); + + return &gvals[idx]; +} + +static void ret_stack_init_task_vars(unsigned long *ret_stack) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(ret_stack); + + memset(gvals, 0, sizeof(*gvals) * FGRAPH_ARRAY_SIZE); +} + +/** + * fgraph_get_task_var - retrieve a task specific state variable + * @gops: The ftrace_ops that owns the task specific variable + * + * Every registered fgraph_ops has a task state variable + * reserved on the task's ret_stack. This function returns the + * address to that variable. + * + * Returns the address to the fgraph_ops @gops tasks specific + * unsigned long variable. 
+ */ +unsigned long *fgraph_get_task_var(struct fgraph_ops *gops) +{ + return ret_stack_get_task_var(current, gops->idx); +} + /* * @offset: The index into @t->ret_stack to find the ret_stack entry * @index: Where to place the index into @t->ret_stack of that entry @@ -791,6 +837,7 @@ static int alloc_retstack_tasklist(unsigned long **ret_stack_list) if (t->ret_stack == NULL) { atomic_set(&t->trace_overrun, 0); + ret_stack_init_task_vars(ret_stack_list[start]); t->curr_ret_stack = 0; t->curr_ret_depth = -1; /* Make sure the tasks see the 0 first: */ @@ -851,6 +898,7 @@ static void graph_init_task(struct task_struct *t, unsigned long *ret_stack) { atomic_set(&t->trace_overrun, 0); + ret_stack_init_task_vars(ret_stack); t->ftrace_timestamp = 0; t->curr_ret_stack = 0; t->curr_ret_depth = -1; @@ -949,6 +997,24 @@ static int start_graph_tracing(void) return ret; } +static void init_task_vars(int idx) +{ + struct task_struct *g, *t; + int cpu; + + for_each_online_cpu(cpu) { + if (idle_task(cpu)->ret_stack) + ret_stack_set_task_var(idle_task(cpu), idx, 0); + } + + read_lock(&tasklist_lock); + for_each_process_thread(g, t) { + if (t->ret_stack) + ret_stack_set_task_var(t, idx, 0); + } + read_unlock(&tasklist_lock); +} + int register_ftrace_graph(struct fgraph_ops *gops) { int command = 0; @@ -998,6 +1064,8 @@ int register_ftrace_graph(struct fgraph_ops *gops) ftrace_graph_return = return_run; ftrace_graph_entry = entry_run; command = FTRACE_START_FUNC_RET; + } else { + init_task_vars(gops->idx); } ret = ftrace_startup(&gops->ops, command); From patchwork Fri Jan 12 10:14:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187680 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77705dyc; Fri, 12 Jan 2024 02:18:08 -0800 (PST) X-Google-Smtp-Source: AGHT+IHiB8vbXtnrRzV27icV7NcnYDaZTFQEjVSpj4KGLjx79OC++tVJJ1radLsHv6Ux4y/c/6LR X-Received: by 2002:a05:6a00:23c4:b0:6da:d8d5:e71a with SMTP id g4-20020a056a0023c400b006dad8d5e71amr1129686pfc.26.1705054688096; Fri, 12 Jan 2024 02:18:08 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054688; cv=none; d=google.com; s=arc-20160816; b=v1Muhp50Ovbv/N34MhINLXm2sPdSlGB8BPHOHRSFjnRRd1xad9wgUQCQr5VQey++y4 3Pa28gFvYETn/xMbm85z5Dad07Z7sZ0CdMsQx+C+bZ8HR0+LyF4yJCTmqnAPsmSiq4bP /dY1q6ofg18KR08F6IJIn69nHOvrMZfhG57O75wZBZV4EtP/dnHziZV9z4PiBLYjTVdy 7tWl5oW+ICNrlXvrcgiW2gNTTSRKhPDyLjo3OfunyHBzQZaTryTi8TQVa3Aa5X0j1Bcr 50Q+Ek0CX1MybpkjThFHbJFTR7RRR4kY+PNibj1hh9VrRX23HG4Ak/4TxBDuQ2ocolNw Rlbg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=UtwgLIIHKWDXsNWZGWhQ/DWVI2UxLnYXxRdMUh1JeD0=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=QVmu9iYoVuo4bh9bFBc/l3qAiCl+deawV0z9unWbUMFY4H8TTbOTxwZzlFI0+14OG5 gdtSX88g0zZKx9RoBKDush2TOAgF+VvWzx3JuN6ylInEf/qXzjeahN/FVSlsrCgzJIBn 6Es3xYOvAFCB4ywGAc/aX+TRPKXKDOhVf0Bmdb7Tlgjoe6ZI9z5ImFzp0rSsZFVZa8os TjTCKKh5S6akjDZoCw57Tzk8dB7bNIItCgyM4DZHRsJBCTt1Sqj6kIY5msnW6vdCG5w1 VUiw+uaQaqpA95gIoX9n5nMhQ7/gj2KleK0rgPPZjiw0jgHKDNJrBN0Fmq5eSBclAj/4 eYRg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=ghDQqF1r; spf=pass (google.com: domain of 
linux-kernel+bounces-24563-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24563-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. [147.75.48.161]) by mx.google.com with ESMTPS id f128-20020a625186000000b006daf91964e8si2756216pfb.112.2024.01.12.02.18.07 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:18:08 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24563-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) client-ip=147.75.48.161; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=ghDQqF1r; spf=pass (google.com: domain of linux-kernel+bounces-24563-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24563-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sy.mirrors.kernel.org (Postfix) with ESMTPS id 4CE3CB20EC0 for ; Fri, 12 Jan 2024 10:16:53 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 114855D755; Fri, 12 Jan 2024 10:14:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ghDQqF1r" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D9FF76167A; Fri, 12 Jan 2024 10:14:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9B627C433F1; Fri, 12 Jan 2024 10:14:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054446; bh=go745fmDLQTm3Kce0otuPLTKFdjIATiLly/SeVTR4PU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ghDQqF1rQcafkqpAR4qTnPXTVZKeQLbMRWLSZ7kVEox7KG1wmlvvOlHpRh/LtHyTP Vv3ZKN5JiGCLZ809lV2VOEP+yw//fGKBxiPv5DldvSb6SNj+3zmnzYaXORVaBz6weh +74Ubp2MewvsfL++P/Xzh9UR6pUly6GzF3MXPlvyS5clt7roR0KeeikAhHmqhkE98C P60tlpJPZmxy45qL6OrylcO6CXCJQ0Sngo1enTEVTHvSeEAi+0wY+uSnGggu0YfvJ0 1vbs+NHXuy3FoKw09YtWq+ENYTHBnPAI7loc5GU67eTUCM/gVHE17RNee0L1GbJ2ZC dAvP0Tl6wHJug== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 16/36] function_graph: Move set_graph_function tests to shadow stack global var Date: Fri, 12 Jan 2024 19:14:00 +0900 Message-Id: <170505444011.459169.4482819096387420268.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 
X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879424354240387 X-GMAIL-MSGID: 1787879424354240387 From: Steven Rostedt (VMware) The use of the task->trace_recursion for the logic used for the set_graph_funnction was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/trace_recursion.h | 5 +---- kernel/trace/trace.h | 32 +++++++++++++++++++++----------- kernel/trace/trace_functions_graph.c | 6 +++--- kernel/trace/trace_irqsoff.c | 4 ++-- kernel/trace/trace_sched_wakeup.c | 4 ++-- 5 files changed, 29 insertions(+), 22 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index d48cd92d2364..2efd5ec46d7f 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,9 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* Set if the function is in the set_graph_function file */ - TRACE_GRAPH_BIT, - /* * In the very unlikely case that an interrupt came in * at a start of graph tracing, and we want to trace @@ -60,7 +57,7 @@ enum { * that preempted a softirq start of a function that * preempted normal context!!!! Luckily, it can't be * greater than 3, so the next two bits are a mask - * of what the depth is when we set TRACE_GRAPH_BIT + * of what the depth is when we set TRACE_GRAPH_FL */ TRACE_GRAPH_DEPTH_START_BIT, diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 3176f8dcaf94..883d5c64f43f 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -895,11 +895,16 @@ extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); extern void free_fgraph_ops(struct trace_array *tr); +enum { + TRACE_GRAPH_FL = 1, +}; + #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash; -static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) +static inline int +ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) { unsigned long addr = trace->func; int ret = 0; @@ -921,12 +926,11 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) } if (ftrace_lookup_ip(hash, addr)) { - /* * This needs to be cleared on the return functions * when the depth is zero. 
*/ - trace_recursion_set(TRACE_GRAPH_BIT); + *task_var |= TRACE_GRAPH_FL; trace_recursion_set_depth(trace->depth); /* @@ -946,11 +950,14 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) return ret; } -static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) +static inline void +ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace) { - if (trace_recursion_test(TRACE_GRAPH_BIT) && + unsigned long *task_var = fgraph_get_task_var(gops); + + if ((*task_var & TRACE_GRAPH_FL) && trace->depth == trace_recursion_depth()) - trace_recursion_clear(TRACE_GRAPH_BIT); + *task_var &= ~TRACE_GRAPH_FL; } static inline int ftrace_graph_notrace_addr(unsigned long addr) @@ -977,7 +984,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) } #else -static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) +static inline int ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) { return 1; } @@ -986,17 +993,20 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) { return 0; } -static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) +static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace) { } #endif /* CONFIG_DYNAMIC_FTRACE */ extern unsigned int fgraph_max_depth; -static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) +static inline bool +ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace) { + unsigned long *task_var = fgraph_get_task_var(gops); + /* trace it when it is-nested-in or is a function enabled. */ - return !(trace_recursion_test(TRACE_GRAPH_BIT) || - ftrace_graph_addr(trace)) || + return !((*task_var & TRACE_GRAPH_FL) || + ftrace_graph_addr(task_var, trace)) || (trace->depth < 0) || (fgraph_max_depth && trace->depth >= fgraph_max_depth); } diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 7f30652f0e97..66cce73e94f8 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -160,7 +160,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, if (!ftrace_trace_task(tr)) return 0; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; if (ftrace_graph_ignore_irqs()) @@ -247,7 +247,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace, long disabled; int cpu; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); @@ -269,7 +269,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace, static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 5478f4c4f708..fce064e20570 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -184,7 +184,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, unsigned int trace_ctx; int ret; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; /* * Do not trace a function if it's filtered by set_graph_notrace. 
@@ -214,7 +214,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace, unsigned long flags; unsigned int trace_ctx; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (!func_prolog_dec(tr, &data, &flags)) return; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 49bcc812652c..130ca7e7787e 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -120,7 +120,7 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace, unsigned int trace_ctx; int ret = 0; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; /* * Do not trace a function if it's filtered by set_graph_notrace. @@ -149,7 +149,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace, struct trace_array_cpu *data; unsigned int trace_ctx; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (!func_prolog_preempt_disable(tr, &data, &trace_ctx)) return; From patchwork Fri Jan 12 10:14:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187676 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77426dyc; Fri, 12 Jan 2024 02:17:25 -0800 (PST) X-Google-Smtp-Source: AGHT+IEWSUkegujgorhlxwD/pWAdRTcb+a8cKMnvEEVY6K9lc/fUkCUL/PWvwJYb4wsYHdhc3dSm X-Received: by 2002:a05:6512:b1e:b0:50e:af9d:9b1 with SMTP id w30-20020a0565120b1e00b0050eaf9d09b1mr678542lfu.14.1705054645111; Fri, 12 Jan 2024 02:17:25 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054645; cv=none; d=google.com; s=arc-20160816; b=EiwTMoG2ebx8JOqfwq2vSFBIJ5sSyEjsyIzBcGfh/plGbp5f3ykWjt8YVIE3/HFKDe TzMnZiiIv9N2GBi570hfO9Um1klRwP92ew3o25zSPkNbkOwSkNbRWlgyA9wp1/VY17jZ qkjQsxEiMkv3psc0+KfBKUtIa8d0wdzKXz5RBQiQ0dQZiBlpC83RHyyM0w7pSbUyczA+ fs14oRzEW34l99bEtgVvUgfRkQUc3uUIwaVfgXSyh+6Tcvl1czzpGZmGkwfSL+/egL9O 5QhkSI6nXYmCelTcp9t2dzv9lvAOf/+K1FuDxbNMoNTA3Axu3qm2oU0fuUdksiBscX1m QezA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=9CuribqGRXZVBw1pJiF7m/l1J/GRDApfXdZ+8ZAOymc=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=tsj+NfFxdfDZSo0w/oCelvVoFzWKMbVqQWK5G7xoynS/gWXfMd68JiVn1O78qi3/Xc cqJ45kTXjjqsAk3c6jDsk3Kvj4Cjkn4cbE7F6rqjB8r6kCM3OZAHp/jN4bYLhGYKrwTq VFshK444/gUi5jchf3J6qEJF1MN8PY502og2kpEgayEweJlaJFJxp0L+bZjwLMw3EsrS kpo46QA8im8UETCtLb+DlxHBI8tQL4oOKvZZUPUj7s4zvbLVYzknlofNVEtyNwyl6TgQ U8K/7gc1pYWzqZS3YvT3iOuN2fSg6kvgg0cI+dMxdPZ9zFQ4/2LzPgVSn3QCgMuSUD/6 0uPw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="RUGP/VGO"; spf=pass (google.com: domain of linux-kernel+bounces-24565-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24565-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
[2604:1380:4601:e00::3]) by mx.google.com with ESMTPS id lm23-20020a170906981700b00a2ceaf80bcasi57781ejb.983.2024.01.12.02.17.24 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:17:25 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24565-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) client-ip=2604:1380:4601:e00::3; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="RUGP/VGO"; spf=pass (google.com: domain of linux-kernel+bounces-24565-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24565-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by am.mirrors.kernel.org (Postfix) with ESMTPS id B10CF1F223F7 for ; Fri, 12 Jan 2024 10:17:24 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 419B25EE8A; Fri, 12 Jan 2024 10:14:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="RUGP/VGO" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8BF6560B90; Fri, 12 Jan 2024 10:14:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4B9BFC433C7; Fri, 12 Jan 2024 10:14:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054458; bh=L5NcxvFVVjpzi2aQTxHsnjnq0+vQBf6ABI8Dmfni5sw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RUGP/VGOfU+aRircD59KTYoEsKC6ms4z2hImcLDQnsNzwFO8qD8DfR6u9TfTYJnlh g/6jJU5EXuqPm7CuSQegKC+aZBRSLpGbdYcmea/5Tv0Pr1TchKrbCtFKMCME+zMef5 pbaxpdwA8/m6+oomEgUH0kL6Y++aizjsX8Bn2mopHUhChgr5ajtqo7pV2GOOQp/403 5r/6HUoxoz3id100YdW+f45wTX+jM1GBRHisajgQiSREbq0OTs+z21umD3feR2f635 B9pIVGGQoA6WeavLByo4dUaZVEK62mOXqQh4w2DqZD2tK2enthw70/jR7uu64jCVz9 iW5GxDajiCxeQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 17/36] function_graph: Move graph depth stored data to shadow stack global var Date: Fri, 12 Jan 2024 19:14:11 +0900 Message-Id: <170505445179.459169.14013960772012231130.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879379210173579 X-GMAIL-MSGID: 1787879379210173579 From: Steven Rostedt (VMware) The use of the task->trace_recursion for the logic used for the function graph depth was a bit of an abuse of that variable. 
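Since the same per-task word now has to hold both the TRACE_GRAPH_FL flag and the two depth bits, a self-contained sketch of the packing may help; the bit positions below mirror the enum in this patch (flag in bit 0, depth in bits 2-3), but the program itself is illustrative only.

/* Illustrative only: how the flag and the 2-bit depth share one word,
 * mirroring TRACE_GRAPH_FL (bit 0) and TRACE_GRAPH_DEPTH_START_BIT (bit 2). */
#include <stdio.h>

#define GRAPH_FL		1UL	/* mirrors TRACE_GRAPH_FL */
#define DEPTH_START_BIT		2	/* mirrors TRACE_GRAPH_DEPTH_START_BIT */

static unsigned long graph_depth(unsigned long var)
{
	return (var >> DEPTH_START_BIT) & 3;
}

static void graph_set_depth(unsigned long *var, int depth)
{
	*var &= ~(3UL << DEPTH_START_BIT);
	*var |= ((unsigned long)depth & 3) << DEPTH_START_BIT;
}

int main(void)
{
	unsigned long task_var = 0;

	task_var |= GRAPH_FL;		/* function matched set_graph_function */
	graph_set_depth(&task_var, 2);	/* depth recorded when the flag was set */

	printf("flag=%lu depth=%lu\n", task_var & GRAPH_FL, graph_depth(task_var));
	return 0;
}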
Now that there exists global vars that are per stack for registered graph traces, use that instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/trace_recursion.h | 29 ----------------------------- kernel/trace/trace.h | 34 ++++++++++++++++++++++++++++++++-- 2 files changed, 32 insertions(+), 31 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index 2efd5ec46d7f..00e792bf148d 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,25 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* - * In the very unlikely case that an interrupt came in - * at a start of graph tracing, and we want to trace - * the function in that interrupt, the depth can be greater - * than zero, because of the preempted start of a previous - * trace. In an even more unlikely case, depth could be 2 - * if a softirq interrupted the start of graph tracing, - * followed by an interrupt preempting a start of graph - * tracing in the softirq, and depth can even be 3 - * if an NMI came in at the start of an interrupt function - * that preempted a softirq start of a function that - * preempted normal context!!!! Luckily, it can't be - * greater than 3, so the next two bits are a mask - * of what the depth is when we set TRACE_GRAPH_FL - */ - - TRACE_GRAPH_DEPTH_START_BIT, - TRACE_GRAPH_DEPTH_END_BIT, - /* * To implement set_graph_notrace, if this bit is set, we ignore * function graph tracing of called functions, until the return @@ -78,16 +59,6 @@ enum { #define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0) #define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit))) -#define trace_recursion_depth() \ - (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3) -#define trace_recursion_set_depth(depth) \ - do { \ - current->trace_recursion &= \ - ~(3 << TRACE_GRAPH_DEPTH_START_BIT); \ - current->trace_recursion |= \ - ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \ - } while (0) - #define TRACE_CONTEXT_BITS 4 #define TRACE_FTRACE_START TRACE_FTRACE_BIT diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 883d5c64f43f..1a467b5437b3 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -897,8 +897,38 @@ extern void free_fgraph_ops(struct trace_array *tr); enum { TRACE_GRAPH_FL = 1, + + /* + * In the very unlikely case that an interrupt came in + * at a start of graph tracing, and we want to trace + * the function in that interrupt, the depth can be greater + * than zero, because of the preempted start of a previous + * trace. In an even more unlikely case, depth could be 2 + * if a softirq interrupted the start of graph tracing, + * followed by an interrupt preempting a start of graph + * tracing in the softirq, and depth can even be 3 + * if an NMI came in at the start of an interrupt function + * that preempted a softirq start of a function that + * preempted normal context!!!! 
Luckily, it can't be + * greater than 3, so the next two bits are a mask + * of what the depth is when we set TRACE_GRAPH_FL + */ + + TRACE_GRAPH_DEPTH_START_BIT, + TRACE_GRAPH_DEPTH_END_BIT, }; +static inline unsigned long ftrace_graph_depth(unsigned long *task_var) +{ + return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3; +} + +static inline void ftrace_graph_set_depth(unsigned long *task_var, int depth) +{ + *task_var &= ~(3 << TRACE_GRAPH_DEPTH_START_BIT); + *task_var |= (depth & 3) << TRACE_GRAPH_DEPTH_START_BIT; +} + #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash; @@ -931,7 +961,7 @@ ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) * when the depth is zero. */ *task_var |= TRACE_GRAPH_FL; - trace_recursion_set_depth(trace->depth); + ftrace_graph_set_depth(task_var, trace->depth); /* * If no irqs are to be traced, but a set_graph_function @@ -956,7 +986,7 @@ ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace unsigned long *task_var = fgraph_get_task_var(gops); if ((*task_var & TRACE_GRAPH_FL) && - trace->depth == trace_recursion_depth()) + trace->depth == ftrace_graph_depth(task_var)) *task_var &= ~TRACE_GRAPH_FL; } From patchwork Fri Jan 12 10:14:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187678 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77512dyc; Fri, 12 Jan 2024 02:17:41 -0800 (PST) X-Google-Smtp-Source: AGHT+IEiFGkxBClQAUf8P1f8GiwaZgxP+aBvZd1+v/YWPv9HTMX+cUbTftV99YiCRxNe2B5D1TZi X-Received: by 2002:a17:907:1608:b0:a2c:e148:e2d7 with SMTP id cw8-20020a170907160800b00a2ce148e2d7mr157705ejd.2.1705054661416; Fri, 12 Jan 2024 02:17:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054661; cv=none; d=google.com; s=arc-20160816; b=oAyITJqoRNAxrfB7jptZUrQZweTOaApdaiN/eoL5kN0uakcsc17aPItj77+C652ZZK NP82df8hEXTujIHESPvImM6aEuRkrFj0JniD7wZucnWvEfnRo2TT0J5UtNNH5jd3W9Ds 7A7DPIoOrPFZ9wKsHPG8jErm40Shi6nx4Br3i16EgKQRvDvdb6qzzUXL6rWk0NGHGHTq degcQ0hLXUpECdn/8L2dBc7A8uAR3DCrfeEsP+faeOByzw63/708Wtf/E1gOGxr19g1k sxc4hVNfs/NRxhNdOxVL36M87UlaoJEICOoGVaelMZLwswLBpSYVJ82L3ukUdQ+ZR7Vw Pc5A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=ocz7aahkU1q+mzRbwwD6DbRc+cK1SxI84ZYWCER1RQI=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=aM45A6DhRcihF/1T8AO8GrdeephXy8QQonkbToEbOJ1SvD8iSjn5fMLHdA7wdv419o wwgqqyPGryg0g3w7clzHdgvFNCm6fLh2U6ZZq2Z+uYZjbhv56Pxf55nofJBiCexBbQ7c bSqQI1HexKdaoXOKy1zme45BJ/5WXaAQAyXjydIzeHpXHYtEwk7igMzl9X82+8cj9BSq OHU6dkiCp8yt0b/rinVqDs3LEqL5MbvFGIxA0bQMC5IPHHF8VfxmcSnbfPY1XqIWDSO4 GKBI4o2yc9T73fiKux0kuSLLTcihiCubLqRPcvf9x96LsxansfgQzDl9cTwdLo/g+i/m Uaow== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=fTopnriY; spf=pass (google.com: domain of linux-kernel+bounces-24566-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24566-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
[2604:1380:4601:e00::3]) by mx.google.com with ESMTPS id ks3-20020a170906f84300b00a26b9a288bfsi1295052ejb.370.2024.01.12.02.17.41 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:17:41 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24566-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) client-ip=2604:1380:4601:e00::3; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=fTopnriY; spf=pass (google.com: domain of linux-kernel+bounces-24566-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24566-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by am.mirrors.kernel.org (Postfix) with ESMTPS id 0E9291F23479 for ; Fri, 12 Jan 2024 10:17:41 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 03D07627FC; Fri, 12 Jan 2024 10:14:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fTopnriY" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 605115EE98; Fri, 12 Jan 2024 10:14:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 15094C433C7; Fri, 12 Jan 2024 10:14:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054469; bh=QNjMxMJQNC5kAkv3s4bCawBXsRiU8SwHk0osgr4U2eM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fTopnriYzv+U5ebJAQmvl+MBX+tx4BJqyLju5NGK05KETD2TdyPCPS7lgJ1n7r95V MkbZtkiqziaf6s4mA2rldnuAdBG/j9KbVvpkFulyiGmCdH2kMCo+dEYpULgUlUl7+z haFuM1G8nbqmDIfQClB8y2soCKflYk7I0ma8gbl1DE+vTH9Oxft56YLtmpbN5eMfY+ ItAZ3WLfo6TdZksty/xGKlXBLIZt+vKXl/CiHfCpVQtkoYoJaId90kej0q7p2tDVPp hp0wn9Jge9AQYR2BX7gfPbNkXzykVr/wOoYSVEsZtzS3TnBsAJodIs2047OvAs5BhB Szgip2VEuEMwg== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 18/36] function_graph: Move graph notrace bit to shadow stack global var Date: Fri, 12 Jan 2024 19:14:23 +0900 Message-Id: <170505446356.459169.7646049202447015855.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879396361731230 X-GMAIL-MSGID: 1787879396361731230 From: Steven Rostedt (VMware) The use of the task->trace_recursion for the logic used for the function graph no-trace was a bit of an abuse of that variable. 
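The notrace bit follows the same entry/return protocol, which is easy to lose in the diff: the entry handler still returns 1 after setting the bit, precisely so that the matching return handler runs and clears it. Below is a standalone, illustrative simulation of that protocol; the bit value mirrors TRACE_GRAPH_NOTRACE as defined in this series, everything else is made up for the example.

/* Illustrative simulation of the set_graph_notrace protocol moved into the
 * per-task word: entry sets the flag but still returns 1, so the matching
 * return handler runs and clears it; nothing in between is recorded. */
#include <stdio.h>
#include <stdbool.h>

#define GRAPH_NOTRACE	(1UL << 4)	/* mirrors TRACE_GRAPH_NOTRACE */

static unsigned long task_var;		/* stands in for the shadow-stack word */

static int sim_entry(const char *func, bool in_notrace_hash)
{
	if (task_var & GRAPH_NOTRACE)
		return 0;			/* inside a notrace region: skip */
	if (in_notrace_hash) {
		task_var |= GRAPH_NOTRACE;
		return 1;			/* return 1 so sim_return() clears it */
	}
	printf("entry  %s\n", func);
	return 1;
}

static void sim_return(const char *func)
{
	if (task_var & GRAPH_NOTRACE) {
		task_var &= ~GRAPH_NOTRACE;
		return;
	}
	printf("return %s\n", func);
}

int main(void)
{
	sim_entry("visible_func", false);
	sim_entry("hidden_func", true);		/* matched set_graph_notrace */
	sim_return("hidden_func");
	sim_return("visible_func");
	return 0;
}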
Now that there exists global vars that are per stack for registered graph traces, use that instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Make description lines shorter than 76 chars. --- include/linux/trace_recursion.h | 7 ------- kernel/trace/trace.h | 9 +++++++++ kernel/trace/trace_functions_graph.c | 10 ++++++---- 3 files changed, 15 insertions(+), 11 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index 00e792bf148d..cc11b0e9d220 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,13 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* - * To implement set_graph_notrace, if this bit is set, we ignore - * function graph tracing of called functions, until the return - * function is called to clear it. - */ - TRACE_GRAPH_NOTRACE_BIT, - /* Used to prevent recursion recording from recursing. */ TRACE_RECORD_RECURSION_BIT, }; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 1a467b5437b3..1dfa031c2812 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -916,8 +916,17 @@ enum { TRACE_GRAPH_DEPTH_START_BIT, TRACE_GRAPH_DEPTH_END_BIT, + + /* + * To implement set_graph_notrace, if this bit is set, we ignore + * function graph tracing of called functions, until the return + * function is called to clear it. + */ + TRACE_GRAPH_NOTRACE_BIT, }; +#define TRACE_GRAPH_NOTRACE (1 << TRACE_GRAPH_NOTRACE_BIT) + static inline unsigned long ftrace_graph_depth(unsigned long *task_var) { return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3; diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 66cce73e94f8..13d0387ac6a6 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -130,6 +130,7 @@ static inline int ftrace_graph_ignore_irqs(void) int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops) { + unsigned long *task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; @@ -138,7 +139,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, int ret; int cpu; - if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) + if (*task_var & TRACE_GRAPH_NOTRACE) return 0; /* @@ -149,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, * returning from the function. */ if (ftrace_graph_notrace_addr(trace->func)) { - trace_recursion_set(TRACE_GRAPH_NOTRACE_BIT); + *task_var |= TRACE_GRAPH_NOTRACE_BIT; /* * Need to return 1 to have the return called * that will clear the NOTRACE bit. 
@@ -240,6 +241,7 @@ void __trace_graph_return(struct trace_array *tr, void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { + unsigned long *task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; @@ -249,8 +251,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace, ftrace_graph_addr_finish(gops, trace); - if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { - trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); + if (*task_var & TRACE_GRAPH_NOTRACE) { + *task_var &= ~TRACE_GRAPH_NOTRACE; return; } From patchwork Fri Jan 12 10:14:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187679 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77640dyc; Fri, 12 Jan 2024 02:17:58 -0800 (PST) X-Google-Smtp-Source: AGHT+IGWbvKfrOnDZisM64Li/cu5gDHZ8ITTQszt8QHPcGyAMB26oPjc7ubhBFtKBrNow6cYyq/R X-Received: by 2002:ae9:e002:0:b0:783:3f1f:6355 with SMTP id m2-20020ae9e002000000b007833f1f6355mr1112309qkk.70.1705054678475; Fri, 12 Jan 2024 02:17:58 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054678; cv=none; d=google.com; s=arc-20160816; b=UYXCOqYTdaTavZ8id6jeLhMriLW1G+IBu49fwORbpLVQRQyqdMBSNF7IU1wng3kz1S uCWESjfAwxqJUbDJJwhvULcUKvjZYkNAEoCY4Tnf32szd4tWO8DC3kiz/NToaiIm00+T NVDOqXNTiUgPq2gLYK4zzjfb+bs0WTLejLJsb9Z7PANabb7M85FzhxEkS9mtPkpOPx8R CZA54iz6T/fTVmObRGHeiOSsz8aLu0WSRvccO82uqAWY8yYVuPmr5d+EAipnH+dv87jy UwtMcEXDQVOlUBceJ8QfV/fycdt7UDjTnWbY2FHPo04KaSEI/+BsmwIt99KTjcNFz7D5 4qag== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=5yK9ZcvAkE3zTPDyo37o03IMyPbB2jKEQFmw+L2A0hU=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=TUnb3jLneijLMeAnJfClWQIBpH/XHkdKFLd/eEnMS1E7g/RyTD731jwqmSuezWxW/j DNjDdo9yehDMCfL5+sWFpRhSiFcjVyaNBeCPI7vOLeGhubADpU1K0un/bYTIRwDj1up3 dg0iTrxLkxGxx6B8F7xw16gQshR8uH0UmNAMH+FtbA3HQTqNsKiOBXM9SNPyLML6t7iA co4cjMN9j1Fq0QhlEospnmESvxzSDr2wD0zHtzXYyFzDLGCFCVzqOWOWmRy0ZcY700Rx PdW8ksKauB2hrJ9V/5rtd8hBZ2DYKIPOHB4tHL067cCdArSabgx+A+wly9rTTLNJMcse veEQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=uHTdr5R4; spf=pass (google.com: domain of linux-kernel+bounces-24567-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24567-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) by mx.google.com with ESMTPS id a20-20020a05620a103400b007831755ed0esi2589528qkk.597.2024.01.12.02.17.58 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:17:58 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24567-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=uHTdr5R4; spf=pass (google.com: domain of linux-kernel+bounces-24567-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24567-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 26D8C1C25320 for ; Fri, 12 Jan 2024 10:17:58 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 0D50D6280A; Fri, 12 Jan 2024 10:14:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="uHTdr5R4" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E3B05D903; Fri, 12 Jan 2024 10:14:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED2F0C433C7; Fri, 12 Jan 2024 10:14:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054482; bh=Qexx1sU7zN84CsGoaBNlxvVol+nkH+SWLGapSVZjn00=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uHTdr5R4GTxtlTMBz3W1poHCBmId3/fmsvlwchN0k5FYYNFJpGDaX2t0ouPDUCOLq ZyECwtIhkY8f/VtZ2m0f3amDMjRX7HXgxtR0Xs9Sq4WWcYeyodNnUH++qsEnLgCaX+ Z6xWsYRA3S9oDXu+ToZNaVs1x7D2++TFPeh5AwrGM3Aa4SvHhsbYlaKrSwb/7223lf 8ghvJbCFksf+sLdGF+DCoYL1TDpL5auePvDiP7NMRPwINVFCjYoMcWXDZdFkLU79hS 3MJzPOhoOcmX7QJRqmQqK5C71Vq7uTb7sOps4AwwvOCIgpeK8v2f40Zy6+7Sx5I3wE nFVTwm67MjMkQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 19/36] function_graph: Implement fgraph_reserve_data() and fgraph_retrieve_data() Date: Fri, 12 Jan 2024 19:14:35 +0900 Message-Id: <170505447524.459169.6864069704622202766.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879414228832465 X-GMAIL-MSGID: 1787879414228832465 From: Steven Rostedt (VMware) Added functions that can be called by a fgraph_ops entryfunc and retfunc to store state between the entry of the function being traced to the exit of the same function. 
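For illustration, a minimal sketch of the intended calling pattern: reserve storage at function entry and read it back at function exit. fgraph_reserve_data()/fgraph_retrieve_data() and their signatures come from this patch; the duration-measuring callbacks and names around them are hypothetical.

/* Hypothetical duration tracer built on the new reserve/retrieve API. */
#include <linux/ftrace.h>
#include <linux/trace_clock.h>
#include <linux/printk.h>

static int duration_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops)
{
	u64 *ts = fgraph_reserve_data(gops->idx, sizeof(*ts));

	if (!ts)
		return 0;	/* no room on the shadow stack: skip this call */
	*ts = trace_clock_local();
	return 1;
}

static void duration_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops)
{
	int size;
	u64 *ts = fgraph_retrieve_data(gops->idx, &size);

	if (ts)
		pr_info("%ps took %llu ns\n", (void *)trace->func,
			(unsigned long long)(trace_clock_local() - *ts));
}

static struct fgraph_ops duration_ops = {
	.entryfunc	= duration_entry,
	.retfunc	= duration_return,
};

If the reservation fails, the entry handler can simply return 0 and that invocation is skipped; per the kernel-doc added below, a zero return from entryfunc() also discards any storage that was reserved, and entryfunc() may reserve only once per call.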
The fgraph_ops entryfunc() may call fgraph_reserve_data() to store up to 32 words onto the task's shadow ret_stack and this then can be retrieved by fgraph_retrieve_data() called by the corresponding retfunc(). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Store fgraph_array index to the data entry. - Both function requires fgraph_array index to store/retrieve data. - Reserve correct size of the data. - Return correct data area. Changes in v2: - Retrieve the reserved size by fgraph_retrieve_data(). - Expand the maximum data size to 32 words. - Update stack index with __get_index(val) if FGRAPH_TYPE_ARRAY entry. - fix typos and make description lines shorter than 76 chars. --- include/linux/ftrace.h | 3 + kernel/trace/fgraph.c | 175 ++++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 170 insertions(+), 8 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 737f84104577..815e865f46c9 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1075,6 +1075,9 @@ struct fgraph_ops { int idx; }; +void *fgraph_reserve_data(int idx, int size_bytes); +void *fgraph_retrieve_data(int idx, int *size_bytes); + /* * Stack of return addresses for functions * of a thread. diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 4ff5d2864fd2..a0eb7077b853 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -41,17 +41,29 @@ * bits: 10 - 11 Type of storage * 0 - reserved * 1 - bitmap of fgraph_array index + * 2 - reserved data * * For bitmap of fgraph_array index * bits: 12 - 27 The bitmap of fgraph_ops fgraph_array index * + * For reserved data: + * bits: 12 - 17 The size in words that is stored + * bits: 18 - 23 The index of fgraph_array, which shows who is stored + * * That is, at the end of function_graph_enter, if the first and forth * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called - * on the return of the function being traced, this is what will be on the - * task's shadow ret_stack: (the stack grows upward) + * on the return of the function being traced, and the forth fgraph_ops + * stored two words of data, this is what will be on the task's shadow + * ret_stack: (the stack grows upward) * * | | <- task->curr_ret_stack * +--------------------------------------------+ + * | data_type(idx:3, size:2, | + * | offset:FGRAPH_RET_INDEX+3) | ( Data with size of 2 words) + * +--------------------------------------------+ ( It is 4 words from the ret_stack) + * | STORED DATA WORD 2 | + * | STORED DATA WORD 1 | + * +------i-------------------------------------+ * | bitmap_type(bitmap:(BIT(3)|BIT(0)), | * | offset:FGRAPH_RET_INDEX) | <- the offset is from here * +--------------------------------------------+ @@ -78,14 +90,23 @@ enum { FGRAPH_TYPE_RESERVED = 0, FGRAPH_TYPE_BITMAP = 1, + FGRAPH_TYPE_DATA = 2, }; #define FGRAPH_INDEX_SIZE 16 #define FGRAPH_INDEX_MASK GENMASK(FGRAPH_INDEX_SIZE - 1, 0) #define FGRAPH_INDEX_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) -/* Currently the max stack index can't be more than register callers */ -#define FGRAPH_MAX_INDEX (FGRAPH_INDEX_SIZE + FGRAPH_RET_INDEX) +#define FGRAPH_DATA_SIZE 5 +#define FGRAPH_DATA_MASK ((1 << FGRAPH_DATA_SIZE) - 1) +#define FGRAPH_DATA_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) + +#define FGRAPH_DATA_INDEX_SIZE 4 +#define FGRAPH_DATA_INDEX_MASK ((1 << FGRAPH_DATA_INDEX_SIZE) - 1) +#define FGRAPH_DATA_INDEX_SHIFT (FGRAPH_DATA_SHIFT + FGRAPH_DATA_SIZE) + +#define FGRAPH_MAX_INDEX \ + 
((FGRAPH_INDEX_SIZE << FGRAPH_DATA_SIZE) + FGRAPH_RET_INDEX) #define FGRAPH_ARRAY_SIZE FGRAPH_INDEX_SIZE @@ -97,6 +118,8 @@ enum { #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) +#define FGRAPH_MAX_DATA_SIZE (sizeof(long) * (1 << FGRAPH_DATA_SIZE)) + /* * Each fgraph_ops has a reservered unsigned long at the end (top) of the * ret_stack to store task specific state. @@ -145,14 +168,39 @@ static int fgraph_lru_alloc_index(void) return idx; } +static inline int __get_index(unsigned long val) +{ + return val & FGRAPH_RET_INDEX_MASK; +} + +static inline int __get_type(unsigned long val) +{ + return (val >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; +} + +static inline int __get_data_index(unsigned long val) +{ + return (val >> FGRAPH_DATA_INDEX_SHIFT) & FGRAPH_DATA_INDEX_MASK; +} + +static inline int __get_data_size(unsigned long val) +{ + return (val >> FGRAPH_DATA_SHIFT) & FGRAPH_DATA_MASK; +} + +static inline unsigned long get_fgraph_entry(struct task_struct *t, int index) +{ + return t->ret_stack[index]; +} + static inline int get_ret_stack_index(struct task_struct *t, int offset) { - return t->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; + return __get_index(t->ret_stack[offset]); } static inline int get_fgraph_type(struct task_struct *t, int offset) { - return (t->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; + return __get_type(t->ret_stack[offset]); } static inline unsigned long @@ -179,6 +227,22 @@ add_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) t->ret_stack[offset] |= (bitmap << FGRAPH_INDEX_SHIFT); } +static inline void *get_fgraph_data(struct task_struct *t, int index) +{ + unsigned long val = t->ret_stack[index]; + + if (__get_type(val) != FGRAPH_TYPE_DATA) + return NULL; + index -= __get_data_size(val); + return (void *)&t->ret_stack[index]; +} + +static inline unsigned long make_fgraph_data(int idx, int size, int offset) +{ + return (idx << FGRAPH_DATA_INDEX_SHIFT) | (size << FGRAPH_DATA_SHIFT) | + (FGRAPH_TYPE_DATA << FGRAPH_TYPE_SHIFT) | offset; +} + /* ftrace_graph_entry set to this to tell some archs to run function graph */ static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops) { @@ -212,6 +276,92 @@ static void ret_stack_init_task_vars(unsigned long *ret_stack) memset(gvals, 0, sizeof(*gvals) * FGRAPH_ARRAY_SIZE); } +/** + * fgraph_reserve_data - Reserve storage on the task's ret_stack + * @idx: The index of fgraph_array + * @size_bytes: The size in bytes to reserve + * + * Reserves space of up to FGRAPH_MAX_DATA_SIZE bytes on the + * task's ret_stack shadow stack, for a given fgraph_ops during + * the entryfunc() call. If entryfunc() returns zero, the storage + * is discarded. An entryfunc() can only call this once per iteration. + * The fgraph_ops retfunc() can retrieve this stored data with + * fgraph_retrieve_data(). + * + * Returns: On success, a pointer to the data on the stack. + * Otherwise, NULL if there's not enough space left on the + * ret_stack for the data, or if fgraph_reserve_data() was called + * more than once for a single entryfunc() call. 
+ */ +void *fgraph_reserve_data(int idx, int size_bytes) +{ + unsigned long val; + void *data; + int curr_ret_stack = current->curr_ret_stack; + int data_size; + + if (size_bytes > FGRAPH_MAX_DATA_SIZE) + return NULL; + + /* Convert to number of longs + data word */ + data_size = DIV_ROUND_UP(size_bytes, sizeof(long)); + + val = get_fgraph_entry(current, curr_ret_stack - 1); + data = ¤t->ret_stack[curr_ret_stack]; + + curr_ret_stack += data_size + 1; + if (unlikely(curr_ret_stack >= SHADOW_STACK_MAX_INDEX)) + return NULL; + + val = make_fgraph_data(idx, data_size, __get_index(val) + data_size + 1); + + /* Set the last word to be reserved */ + current->ret_stack[curr_ret_stack - 1] = val; + + /* Make sure interrupts see this */ + barrier(); + current->curr_ret_stack = curr_ret_stack; + /* Again sync with interrupts, and reset reserve */ + current->ret_stack[curr_ret_stack - 1] = val; + + return data; +} + +/** + * fgraph_retrieve_data - Retrieve stored data from fgraph_reserve_data() + * @idx: the index of fgraph_array (fgraph_ops::idx) + * @size_bytes: pointer to retrieved data size. + * + * This is to be called by a fgraph_ops retfunc(), to retrieve data that + * was stored by the fgraph_ops entryfunc() on the function entry. + * That is, this will retrieve the data that was reserved on the + * entry of the function that corresponds to the exit of the function + * that the fgraph_ops retfunc() is called on. + * + * Returns: The stored data from fgraph_reserve_data() called by the + * matching entryfunc() for the retfunc() this is called from. + * Or NULL if there was nothing stored. + */ +void *fgraph_retrieve_data(int idx, int *size_bytes) +{ + int index = current->curr_ret_stack - 1; + unsigned long val; + + val = get_fgraph_entry(current, index); + while (__get_type(val) == FGRAPH_TYPE_DATA) { + if (__get_data_index(val) == idx) + goto found; + index -= __get_data_size(val) + 1; + val = get_fgraph_entry(current, index); + } + return NULL; +found: + if (size_bytes) + *size_bytes = __get_data_size(val) * + sizeof(long); + return get_fgraph_data(current, index); +} + /** * fgraph_get_task_var - retrieve a task specific state variable * @gops: The ftrace_ops that owns the task specific variable @@ -449,13 +599,18 @@ int function_graph_enter(unsigned long ret, unsigned long func, for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { struct fgraph_ops *gops = fgraph_array[i]; + int save_curr_ret_stack; if (gops == &fgraph_stub) continue; + save_curr_ret_stack = current->curr_ret_stack; if (ftrace_ops_test(&gops->ops, func, NULL) && gops->entryfunc(&trace, gops)) bitmap |= BIT(i); + else + /* Clear out any saved storage */ + current->curr_ret_stack = save_curr_ret_stack; } if (!bitmap) @@ -481,6 +636,7 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, struct fgraph_ops *gops) { struct ftrace_graph_ent trace; + int save_curr_ret_stack; int index; int type; @@ -500,13 +656,15 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, trace.func = func; trace.depth = current->curr_ret_depth; + save_curr_ret_stack = current->curr_ret_stack; if (gops->entryfunc(&trace, gops)) { if (type == FGRAPH_TYPE_RESERVED) set_fgraph_index_bitmap(current, index, BIT(gops->idx)); else add_fgraph_index_bitmap(current, index, BIT(gops->idx)); return 0; - } + } else + current->curr_ret_stack = save_curr_ret_stack; if (type == FGRAPH_TYPE_RESERVED) { current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; @@ -651,7 +809,8 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs 
*ret_regs * curr_ret_stack is after that. */ barrier(); - current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->curr_ret_stack = index - FGRAPH_RET_INDEX; + current->curr_ret_depth--; return ret; } From patchwork Fri Jan 12 10:14:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187696 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp80508dyc; Fri, 12 Jan 2024 02:25:35 -0800 (PST) X-Google-Smtp-Source: AGHT+IGe1qdr8KalY/GFQpSADy4AFM+3VUHzY2OhywAz02JVyQHjnEE6mTI0DErpKNaQsfOpnrag X-Received: by 2002:a17:903:40cb:b0:1d4:5268:27ed with SMTP id t11-20020a17090340cb00b001d4526827edmr796280pld.21.1705055135180; Fri, 12 Jan 2024 02:25:35 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705055135; cv=none; d=google.com; s=arc-20160816; b=iK8MYEgVsdlY92ghTo9zoGbOA5WByIKUgksf/L2UeQinb15Bjvat8T3YIumeFX76rA t+cH7Iz9QuDR2k825uwVmq5bl0i+FvUFSQdKef0YBznlDhao9HLsESPeCxUaVdc5H/3Q Y0+NhvsvzadoNmjUVXbdkxu5k6qQuUKRcELQVaeEeOZODp2CFWkj9iTCWsSR0rL41eFc 6I5T4fSQLUU84c1aSRCvh6jwdChsUNRqMFbmkeXj5INVOpfhxq54XdbzlUXGKDc4aEpy n/GDPM71Bv5kVMsihIkzSx7sR3JfNGe7TWTkGjMxauLjxO7GqF5NvAungJnZV8Xd4A/Q aPzA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=spoHfX2l8GLSymOqCDcIucUvgjdcc2PcP2LEj8ZzRiU=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=OajNStwy7wm0uQESSn3ZAoEvb/AzDYEEU1PD1t6J5BMV05ncvejQt5SPRPguRlALKo FDwRSyYNKoSiILlXxGphW2BUpnnp3GJsf44Yt8sW8zO6sQQxFcfKbVT9DbksijzxSQRZ 8k5yOv3lOwMiDjyHZdOHhnZYMxsCZoM7woovHnLZWFe3tACJeALe/gb45Ml81GEKUg16 F0Zy9WkbOzhQQqMreXnmVTwrbmsiBNKjIt7aRszV0w00h9VlNTmhL/hTha30FwD/vkXe uMm57tn5nMfjQAZYepQjmoqbOnMJk8Uhxwst2SnEMOQ/0LZ963L7Xkp3ty7JQW6tQanc NS9g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=fLN2UMo8; spf=pass (google.com: domain of linux-kernel+bounces-24568-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24568-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
[139.178.88.99]) by mx.google.com with ESMTPS id u12-20020a170902e80c00b001d508e3cf5csi3288399plg.287.2024.01.12.02.25.35 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:25:35 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24568-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) client-ip=139.178.88.99; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=fLN2UMo8; spf=pass (google.com: domain of linux-kernel+bounces-24568-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24568-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sv.mirrors.kernel.org (Postfix) with ESMTPS id 5732B28CDB2 for ; Fri, 12 Jan 2024 10:18:15 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 6E82662817; Fri, 12 Jan 2024 10:14:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fLN2UMo8" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 476C75D916; Fri, 12 Jan 2024 10:14:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 13F5FC433C7; Fri, 12 Jan 2024 10:14:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054493; bh=3y7/UQ9jP1JyrMUIVzw+uMu03u1MVFX8UqCJ0lQhJRQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fLN2UMo84x0dByeQY4WvElW6LNqj/zvsBLTqfBmr5YzTqVPKh7vDNesPKROHdXsUD Lc+Cy3de01SJ/jjkTU3nAq71yQt1Lc/5CRv6F6OfI4G9KuQY8mF8j6ZMwcVY7/FsXI kQiFQONf5WJ8IO33LaPo8TtwMB3AzFvO7nVXRO7KsdTzFl8TZ8SF+eR+iRlten9ebq R3kynMsL+DSQpPu2N7NhQJXC62usaFubh4OG3jpYG8VFXBpy3GTZBcnDiImjJkqmFu ywq5gj/cCMI5kz+PhvbujOfxF/Wc98dJ9925yE0DwUfMKZd6ipV1TsjrATuRTZI//2 wcwzBW89z5V2A== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 20/36] function_graph: Improve push operation for several interrupts Date: Fri, 12 Jan 2024 19:14:47 +0900 Message-Id: <170505448743.459169.5432129428639964197.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879893381265835 X-GMAIL-MSGID: 1787879893381265835 From: Masami Hiramatsu (Google) Improve push and data reserve operation on the shadow stack for several sequencial interrupts. 
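Before the step-by-step walkthrough below, the publish ordering that this patch ends up using (in both ftrace_push_return_trace() and fgraph_reserve_data()) can be summarized by the following simplified pseudo-C; new_index and val stand in for the real index arithmetic and type word, and error checks are omitted (see the diff for the exact code):

    current->rsrv_ret_stack = new_index;            /* 1. reserve the slots; an irq that
                                                     *    nests here can now commit the
                                                     *    reservation on our behalf      */
    barrier();
    current->ret_stack[new_index - 1] = val;        /* 2. write the index/type entry     */
    barrier();
    current->curr_ret_stack = new_index;            /* 3. publish it to the unwinder     */
    barrier();
    current->ret_stack[new_index - 1] = val;        /* 4. rewrite it, in case a nested
                                                     *    irq overwrote it while
                                                     *    committing the reservation     */

The scenario below explains why the extra reservation pointer and the final rewrite are needed.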
To push a ret_stack or data entry on the shadow stack, we need to prepare an index (offset) entry before updating the stack pointer (curr_ret_stack) so that an unwinder running from an interrupt can find the next return address on the shadow stack. Currently we write the index entry, update curr_ret_stack, and then rewrite the entry. But that is not enough when two interrupts happen and the first one corrupts the entry. For example:

1. write reserved index entry at ret_stack[new_index - 1] and ret addr.
2. interrupt comes.
2.1. push new index and ret addr on ret_stack.
2.2. pop it. (corrupts the entries at new_index - 1)
3. return from interrupt.
4. update curr_ret_stack = new_index.
5. interrupt comes again.
5.1. unwind <------ may not work.

To avoid this issue, this introduces a new rsrv_ret_stack reservation pointer and a new push code path (slow path) that forcibly commits the previously reserved entry:

0. update rsrv_ret_stack = new_index.
1. write reserved index entry at ret_stack[new_index - 1] and ret addr.
2. interrupt comes.
2.0. if rsrv_ret_stack != curr_ret_stack, add a reserved index entry at ret_stack[rsrv_ret_stack - 1] that points to the previous ret_stack referenced by ret_stack[curr_ret_stack - 1], and update curr_ret_stack = rsrv_ret_stack.
2.1. push new index and ret addr on ret_stack.
2.2. pop it. (corrupts the entries at new_index - 1)
3. return from interrupt.
4. update curr_ret_stack = new_index.
5. interrupt comes again.
5.1. unwind works, because curr_ret_stack points to the previously saved ret_stack.
5.2. this can do push/pop operations too.
6. return from interrupt.
7. rewrite the reserved index entry at ret_stack[new_index] again.

This may be a bit heavier, but it is safer. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v6: - Newly added. --- include/linux/sched.h | 1 kernel/trace/fgraph.c | 135 +++++++++++++++++++++++++++++++++++-------------- 2 files changed, 98 insertions(+), 38 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 4dab30f00211..fda551e1aade 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1387,6 +1387,7 @@ struct task_struct { #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* Index of current stored address in ret_stack: */ int curr_ret_stack; + int rsrv_ret_stack; int curr_ret_depth; /* Stack of return addresses for return function tracing: */ diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index a0eb7077b853..6a9206ebc6a2 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -298,31 +298,47 @@ void *fgraph_reserve_data(int idx, int size_bytes) unsigned long val; void *data; int curr_ret_stack = current->curr_ret_stack; + int rsrv_ret_stack = current->rsrv_ret_stack; int data_size; if (size_bytes > FGRAPH_MAX_DATA_SIZE) return NULL; + /* + * Since this API is used after pushing ret_stack, curr_ret_stack + * should be synchronized with rsrv_ret_stack.
+ */ + if (WARN_ON_ONCE(curr_ret_stack != rsrv_ret_stack)) + return NULL; + /* Convert to number of longs + data word */ data_size = DIV_ROUND_UP(size_bytes, sizeof(long)); val = get_fgraph_entry(current, curr_ret_stack - 1); data = ¤t->ret_stack[curr_ret_stack]; - curr_ret_stack += data_size + 1; - if (unlikely(curr_ret_stack >= SHADOW_STACK_MAX_INDEX)) + rsrv_ret_stack += data_size + 1; + if (unlikely(rsrv_ret_stack >= SHADOW_STACK_MAX_INDEX)) return NULL; val = make_fgraph_data(idx, data_size, __get_index(val) + data_size + 1); - /* Set the last word to be reserved */ - current->ret_stack[curr_ret_stack - 1] = val; - - /* Make sure interrupts see this */ + /* Extend the reserved-ret_stack at first */ + current->rsrv_ret_stack = rsrv_ret_stack; + /* And sync with interrupts, to see the new rsrv_ret_stack */ + barrier(); + /* + * The same reason as the push, this entry must be here before updating + * the curr_ret_stack. But any interrupt comes before updating + * curr_ret_stack, it may commit it with different reserve entry. + * Thus we need to write the data entry after update the curr_ret_stack + * again. And these operations must be ordered. + */ + current->ret_stack[rsrv_ret_stack - 1] = val; barrier(); - current->curr_ret_stack = curr_ret_stack; - /* Again sync with interrupts, and reset reserve */ - current->ret_stack[curr_ret_stack - 1] = val; + current->curr_ret_stack = rsrv_ret_stack; + barrier(); + current->ret_stack[rsrv_ret_stack - 1] = val; return data; } @@ -403,7 +419,16 @@ get_ret_stack(struct task_struct *t, int offset, int *index) return NULL; idx = get_ret_stack_index(t, --offset); - if (WARN_ON_ONCE(idx <= 0 || idx > offset)) + /* + * This can happen if an interrupt comes just before the first push + * increments the curr_ret_stack, and that interrupt pushes another + * entry. In that case, the frist push is forcibly committed with a + * reserved entry which points -1 stack index. + */ + if (unlikely(idx > offset)) + return NULL; + + if (WARN_ON_ONCE(idx <= 0)) return NULL; offset -= idx; @@ -473,7 +498,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, struct ftrace_ret_stack *ret_stack; unsigned long long calltime; unsigned long val; - int index; + int index, rindex; if (unlikely(ftrace_graph_is_dead())) return -EBUSY; @@ -481,18 +506,38 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, if (!current->ret_stack) return -EBUSY; - /* - * At first, check whether the previous fgraph callback is pushed by - * the fgraph on the same function entry. - * But if @func is the self tail-call function, we also need to ensure - * the ret_stack is not for the previous call by checking whether the - * bit of @fgraph_idx is set or not. - */ - ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); - if (ret_stack && ret_stack->func == func && - get_fgraph_type(current, index + FGRAPH_RET_INDEX) == FGRAPH_TYPE_BITMAP && - !is_fgraph_index_set(current, index + FGRAPH_RET_INDEX, fgraph_idx)) - return index + FGRAPH_RET_INDEX; + index = READ_ONCE(current->curr_ret_stack); + rindex = READ_ONCE(current->rsrv_ret_stack); + if (unlikely(index != rindex)) { + /* + * This interrupts the push operation. Commit previous push + * temporarily with reserved entry. 
+ */ + if (unlikely(index <= 0)) + /* This will make ret_stack[index - 1] points -1 */ + val = rindex - index; + else + val = get_ret_stack_index(current, index - 1) + + rindex - index; + current->ret_stack[rindex - 1] = val; + /* Forcibly commit it */ + current->curr_ret_stack = index = rindex; + } else { + /* + * Check whether the previous fgraph callback is pushed by the fgraph + * on the same function entry. + * But if @func is the self tail-call function, we also need to ensure + * the ret_stack is not for the previous call by checking whether the + * bit of @fgraph_idx is set or not. + */ + ret_stack = get_ret_stack(current, index, &index); + if (ret_stack && ret_stack->func == func && + get_fgraph_type(current, index + FGRAPH_RET_INDEX) == FGRAPH_TYPE_BITMAP && + !is_fgraph_index_set(current, index + FGRAPH_RET_INDEX, fgraph_idx)) + return index + FGRAPH_RET_INDEX; + /* Since get_ret_stack() overwrites 'index', recover it. */ + index = rindex; + } val = (FGRAPH_TYPE_RESERVED << FGRAPH_TYPE_SHIFT) | FGRAPH_RET_INDEX; @@ -512,38 +557,45 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, calltime = trace_clock_local(); - index = READ_ONCE(current->curr_ret_stack); ret_stack = RET_STACK(current, index); index += FGRAPH_RET_INDEX; - /* ret offset = FGRAPH_RET_INDEX ; type = reserved */ + /* + * At first, reserve the ret_stack. Beyond this point, any interrupt + * will only overwrite ret_stack[index] by a reserved entry which points + * the previous ret_stack or -1. + */ + current->rsrv_ret_stack = index + 1; + /* And ensure that the following happens after reserved */ + barrier(); + current->ret_stack[index] = val; ret_stack->ret = ret; /* * The unwinders expect curr_ret_stack to point to either zero - * or an index where to find the next ret_stack. Even though the - * ret stack might be bogus, we want to write the ret and the - * index to find the ret_stack before we increment the stack point. - * If an interrupt comes in now before we increment the curr_ret_stack - * it may blow away what we wrote. But that's fine, because the - * index will still be correct (even though the 'ret' won't be). - * What we worry about is the index being correct after we increment - * the curr_ret_stack and before we update that index, as if an - * interrupt comes in and does an unwind stack dump, it will need - * at least a correct index! + * or an index where to find the next ret_stack which has actual ret + * address. Thus we want to write the ret and the index to find the + * ret_stack before we increment the curr_ret_stack. */ barrier(); current->curr_ret_stack = index + 1; /* + * There are two possibilities here. + * - More than one interrupts push/pop their entry between update + * rsrv_ret_stack and curr_ret_stack. In this case, curr_ret_stack + * is already equal to the rsrv_ret_stack and + * current->ret_stack[index] is overwritten by reserved entry which + * points the previous ret_stack. But ret_stack->ret is not. + * - Or, no interrupts push/pop. So current->ret_stack[index] keeps + * its value. * This next barrier is to ensure that an interrupt coming in - * will not corrupt what we are about to write. + * will not overwrite what we are about to write anymore. */ barrier(); - /* Still keep it reserved even if an interrupt came in */ + /* Rewrite the entry again in case it was overwritten. 
*/ current->ret_stack[index] = val; - ret_stack->ret = ret; ret_stack->func = func; ret_stack->calltime = calltime; #ifdef HAVE_FUNCTION_GRAPH_FP_TEST @@ -625,6 +677,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, return 0; out_ret: current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->rsrv_ret_stack = current->curr_ret_stack; out: current->curr_ret_depth--; return -EBUSY; @@ -668,6 +721,7 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, if (type == FGRAPH_TYPE_RESERVED) { current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->rsrv_ret_stack = current->curr_ret_stack; current->curr_ret_depth--; } return -EBUSY; @@ -810,6 +864,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs */ barrier(); current->curr_ret_stack = index - FGRAPH_RET_INDEX; + current->rsrv_ret_stack = current->curr_ret_stack; current->curr_ret_depth--; return ret; @@ -998,6 +1053,7 @@ static int alloc_retstack_tasklist(unsigned long **ret_stack_list) atomic_set(&t->trace_overrun, 0); ret_stack_init_task_vars(ret_stack_list[start]); t->curr_ret_stack = 0; + t->rsrv_ret_stack = 0; t->curr_ret_depth = -1; /* Make sure the tasks see the 0 first: */ smp_wmb(); @@ -1060,6 +1116,7 @@ graph_init_task(struct task_struct *t, unsigned long *ret_stack) ret_stack_init_task_vars(ret_stack); t->ftrace_timestamp = 0; t->curr_ret_stack = 0; + t->rsrv_ret_stack = 0; t->curr_ret_depth = -1; /* make curr_ret_stack visible before we add the ret_stack */ smp_wmb(); @@ -1073,6 +1130,7 @@ graph_init_task(struct task_struct *t, unsigned long *ret_stack) void ftrace_graph_init_idle_task(struct task_struct *t, int cpu) { t->curr_ret_stack = 0; + t->rsrv_ret_stack = 0; t->curr_ret_depth = -1; /* * The idle task has no parent, it either has its own @@ -1101,6 +1159,7 @@ void ftrace_graph_init_task(struct task_struct *t) /* Make sure we do not use the parent ret_stack */ t->ret_stack = NULL; t->curr_ret_stack = 0; + t->rsrv_ret_stack = 0; t->curr_ret_depth = -1; if (ftrace_graph_active) { From patchwork Fri Jan 12 10:14:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187683 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78132dyc; Fri, 12 Jan 2024 02:19:14 -0800 (PST) X-Google-Smtp-Source: AGHT+IH2lJuw3FBEsaL6wxP+zSFed2UEc+6ut74vnswgVTGSRXg88aujS3lzCLxGuEkY5nbneWc3 X-Received: by 2002:a17:903:1cc:b0:1d4:bd18:7c47 with SMTP id e12-20020a17090301cc00b001d4bd187c47mr1132529plh.57.1705054754371; Fri, 12 Jan 2024 02:19:14 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054754; cv=none; d=google.com; s=arc-20160816; b=c9fJxrQJ1+ilYfvzBCpFdIgDpVqA8Tanu8T4I6JQE7x554DI3JycUGP+PySqIg81cD 2ruO5IbmZFUhK0YTXMJY3z3aehFUKgTsmSNpfJY/Pdb8C/Etec9m6D8Is1QB5DFUUmg/ 4uzjT6d+wjiVhZF/Fvf1bj0WPlRRc12nRQbmiMcxWaJahFYh4J3G5Ok8jKT0GSFyyfrj t1dOUd6IHC2OsM8uklitEic+CoZtha/ceyNNpMAyy7bzRtwrCPxQj0Zk2NWf7DhuH0Bv qoAkKqlfp2hmnlM2ZKXBRa2rZhOYIK7O8omWuKUDTNSyN8iOfhT/IorfyQj2U2V9+MVR EaSg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=gNjysZumaRGj7wRl3wCUGBAp98rdVBpztEtFKkRLpNA=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; 
b=E/uipIeO9Ocoofasb39HYhzv69gGy9eEaPzezjuUGu9nSPKtDvt2kxHQNonK6qmH6N fYZLbvsItYrY7/qJapsiQtpheWN9j28l7wWgXzM8qjafq4FIl9dq1Dsi9bD2wpOlN1IN Doy4ueNzi+gmY8T/i+B5R5KtgtCudHJE4HrSTCy6RsRQZZgMhnr/3QITVjqNmDrq5+CM jd+VZYDevh1F21vvDAZzfZatmMel55NLsnDqKtPwdgbQyH0SuzZxBInlRrNfq02+qKD9 aMHcDp6woj0dnRLeN6Knuk8xfweeQjcfSLMUCiMYQa/HDKymaCMIKmf5QKrWYUW5SMJj XF3w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="J/Qe64jv"; spf=pass (google.com: domain of linux-kernel+bounces-24569-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:40f1:3f00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24569-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. [2604:1380:40f1:3f00::1]) by mx.google.com with ESMTPS id jx5-20020a170903138500b001d58ce6e4d3si2862547plb.390.2024.01.12.02.19.13 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:19:14 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24569-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:40f1:3f00::1 as permitted sender) client-ip=2604:1380:40f1:3f00::1; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="J/Qe64jv"; spf=pass (google.com: domain of linux-kernel+bounces-24569-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:40f1:3f00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24569-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sy.mirrors.kernel.org (Postfix) with ESMTPS id B60A7B23328 for ; Fri, 12 Jan 2024 10:18:33 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 5CA9D62815; Fri, 12 Jan 2024 10:15:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="J/Qe64jv" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8DD305D916; Fri, 12 Jan 2024 10:15:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3B5E2C43390; Fri, 12 Jan 2024 10:15:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054506; bh=loWWqL6DZVWbVMu4FDyOcIsVywT7pv1to3y8Tk+anuc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=J/Qe64jvUnHvsGv6R0oS7gUWlvh54qz7H/jOfHRpPg4613fSe1PXAuuaytjAPfvR7 //TsjNYF3kcK5N6kDVEaMb6V+n5zPQzOq0CmtWXbXQhbl5R6fSRXC9wdOX+BqBJI7O qlMKGmH3cts/oCw5dGLIc6VzJWql017ljjVkt74LIC56jz2mmMsircZ0MG29WkOTCA DT85Xo/V7Vzu+MOGSj7MUHUKyxF42rh9TclfCDr5gy2Gbn7NLcYld5h8c49kovTIEI 4MEytI7W0GMUZIoRUMEqUIgrunvswcgeKPvdC/Vy7aPUlP5nJQVfG7NBDFaTG+y9T2 V2jGtrbne4pGQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren 
Subject: [PATCH v6 21/36] function_graph: Add selftest for passing local variables Date: Fri, 12 Jan 2024 19:14:59 +0900 Message-Id: <170505449911.459169.13401735083093815329.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879494529611236 X-GMAIL-MSGID: 1787879494529611236 From: Steven Rostedt (VMware) Add boot up selftest that passes variables from a function entry to a function exit, and make sure that they do get passed around. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Add reserved size test. - Use pr_*() instead of printk(KERN_*). --- kernel/trace/trace_selftest.c | 169 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 169 insertions(+) diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index f0758afa2f7d..4d86cd4c8c8c 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -756,6 +756,173 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) #ifdef CONFIG_FUNCTION_GRAPH_TRACER +#ifdef CONFIG_DYNAMIC_FTRACE + +#define BYTE_NUMBER 123 +#define SHORT_NUMBER 12345 +#define WORD_NUMBER 1234567890 +#define LONG_NUMBER 1234567890123456789LL + +static int fgraph_store_size __initdata; +static const char *fgraph_store_type_name __initdata; +static char *fgraph_error_str __initdata; +static char fgraph_error_str_buf[128] __initdata; + +static __init int store_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) +{ + const char *type = fgraph_store_type_name; + int size = fgraph_store_size; + void *p; + + p = fgraph_reserve_data(gops->idx, size); + if (!p) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to reserve %s\n", type); + fgraph_error_str = fgraph_error_str_buf; + return 0; + } + + switch (fgraph_store_size) { + case 1: + *(char *)p = BYTE_NUMBER; + break; + case 2: + *(short *)p = SHORT_NUMBER; + break; + case 4: + *(int *)p = WORD_NUMBER; + break; + case 8: + *(long long *)p = LONG_NUMBER; + break; + } + + return 1; +} + +static __init void store_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) +{ + const char *type = fgraph_store_type_name; + long long expect = 0; + long long found = -1; + int size; + char *p; + + p = fgraph_retrieve_data(gops->idx, &size); + if (!p) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to retrieve %s\n", type); + fgraph_error_str = fgraph_error_str_buf; + return; + } + if (fgraph_store_size > size) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Retrieved size %d is smaller than expected %d\n", + size, (int)fgraph_store_size); + fgraph_error_str = fgraph_error_str_buf; + return; + } + + switch (fgraph_store_size) { + case 1: + expect = BYTE_NUMBER; + found = *(char *)p; + break; + case 2: + expect = SHORT_NUMBER; + found = *(short *)p; + break; + case 4: + expect = WORD_NUMBER; + found = *(int *)p; + break; + case 8: + expect = LONG_NUMBER; + found = *(long long *)p; + break; + } + + if (found != expect) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "%s returned not %lld but %lld\n", type, expect, found); + fgraph_error_str = fgraph_error_str_buf; + 
return; + } + fgraph_error_str = NULL; +} + +static struct fgraph_ops store_bytes __initdata = { + .entryfunc = store_entry, + .retfunc = store_return, +}; + +static int __init test_graph_storage_type(const char *name, int size) +{ + char *func_name; + int len; + int ret; + + fgraph_store_type_name = name; + fgraph_store_size = size; + + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to execute storage %s\n", name); + fgraph_error_str = fgraph_error_str_buf; + + pr_cont("PASSED\n"); + pr_info("Testing fgraph storage of %d byte%s: ", size, size > 1 ? "s" : ""); + + func_name = "*" __stringify(DYN_FTRACE_TEST_NAME); + len = strlen(func_name); + + ret = ftrace_set_filter(&store_bytes.ops, func_name, len, 1); + if (ret && ret != -ENODEV) { + pr_cont("*Could not set filter* "); + return -1; + } + + ret = register_ftrace_graph(&store_bytes); + if (ret) { + pr_warn("Failed to init store_bytes fgraph tracing\n"); + return -1; + } + + DYN_FTRACE_TEST_NAME(); + + unregister_ftrace_graph(&store_bytes); + + if (fgraph_error_str) { + pr_cont("*** %s ***", fgraph_error_str); + return -1; + } + + return 0; +} +/* Test the storage passed across function_graph entry and return */ +static __init int test_graph_storage(void) +{ + int ret; + + ret = test_graph_storage_type("byte", 1); + if (ret) + return ret; + ret = test_graph_storage_type("short", 2); + if (ret) + return ret; + ret = test_graph_storage_type("word", 4); + if (ret) + return ret; + ret = test_graph_storage_type("long long", 8); + if (ret) + return ret; + return 0; +} +#else +static inline int test_graph_storage(void) { return 0; } +#endif /* CONFIG_DYNAMIC_FTRACE */ + /* Maximum number of functions to trace before diagnosing a hang */ #define GRAPH_MAX_FUNC_TEST 100000000 @@ -913,6 +1080,8 @@ trace_selftest_startup_function_graph(struct tracer *trace, ftrace_set_global_filter(NULL, 0, 1); #endif + ret = test_graph_storage(); + /* Don't test dynamic tracing, the function tracer already did */ out: /* Stop it if we failed */ From patchwork Fri Jan 12 10:15:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187681 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp77986dyc; Fri, 12 Jan 2024 02:18:52 -0800 (PST) X-Google-Smtp-Source: AGHT+IHMb+sLUj5aMAYEWRoi7Sf5wJ7+W2wzxNyRNbrs8kc7S9yvUjpH3Pf8tuIg9IhJaudWL+RI X-Received: by 2002:a05:6512:488e:b0:50d:1c9d:59e4 with SMTP id eq14-20020a056512488e00b0050d1c9d59e4mr442390lfb.47.1705054732270; Fri, 12 Jan 2024 02:18:52 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054732; cv=none; d=google.com; s=arc-20160816; b=q0RlJ4TwIGjadiCMKfkozCekmFoZYtYJiR/ECh4VKfd/NrYN1vhikasmDIWUEnzMBM 2HJrsSw1MoKU7bP3dt7mZfVkh+uhMpxmI6IamBK++OQjxCl5J67+Wafml56qNu2AaN1H 8B6kEDaK69Q4FMthH9fzZXilN5V6DRQEzlDCJcWN60IjByCwB9LjrsYyd2/cG12Ilcdj ZCSULbL9nZZKRYuQFfWhuxpDs5J15D8+uEn7m8smugzsOlMMefWmHhCAAYy+O8AiY+wk FmuKrAgEunlwLXAUCn4kv6zrrhlOV+KsasA2mKdScrGnPnxy3hxnvPaFRnFnFHJt3FVk 0ERw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=x8OyV35dRw5k/j+0urAtb7UXLLYn8WotvCZqCPUWbck=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=OGJZNYi2Vnor0EDlE3oUQLh4DxXHIjlRE35ACNlVeyvZSn/jQhiTjy46cctwpS2HN5 
/B2zWpowKlGbAWKXYgefcNOheYN5zOhiqZy3LIgIN9a8J3E77DSfiiIrzvY7mkipFBOQ Lty3N1AjSI9ZI86o0KblcuJYKwYgYqQJ1Uzl3IWTjT4M+v3AnDiDhu+e80or0wMMq3AV a28touMW4qocG4XAgoVRuFiAdcyrcIzZ78ro0O2qHlHZ3flb6CCefK5rsMzIfkOuanyy D7gyikt/s/WBZ399wLFboEdRtwVuUTGSi8ZZrEBXtdxnR+9E34gbOr/NeMxKiGhI0VM1 y8qQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=sVs+ZWkM; spf=pass (google.com: domain of linux-kernel+bounces-24570-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24570-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. [2604:1380:4601:e00::3]) by mx.google.com with ESMTPS id e8-20020a170906374800b00a26ee9a2520si1239634ejc.709.2024.01.12.02.18.52 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:18:52 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24570-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) client-ip=2604:1380:4601:e00::3; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=sVs+ZWkM; spf=pass (google.com: domain of linux-kernel+bounces-24570-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:4601:e00::3 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24570-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by am.mirrors.kernel.org (Postfix) with ESMTPS id B13D31F23D2C for ; Fri, 12 Jan 2024 10:18:51 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id CDA83634F4; Fri, 12 Jan 2024 10:15:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="sVs+ZWkM" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CB601634EE; Fri, 12 Jan 2024 10:15:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05811C433C7; Fri, 12 Jan 2024 10:15:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054517; bh=ZExIjzlvIHiWOr1SWHx178nMkcePO/iYDVrSnbmxavo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sVs+ZWkM1z2tZNZ+DnjZcn4KBCmQXe8N3q4G8eQ+q6OdSwvnloEPXyrLXFTIPgN4n 8Ri7fORdA5IWI9/KsGqyb6fAvooX1+QJ6ncqYD0a4cm2cs36b1e986EvuCcLO4OaQj lCJcvhjsM9FZVtRPJwt4ucfglRpOnzpt9P49RoF5ZEh3SuEaXQ/0j9hw8XXK2dyyfh PWIyyWI8Gbr7nagl/T8UOdeN/881YeZjANEpPZUY2jGGK9U6oM/sqzpLzzun/4k8Fm MTckFwCwltcO1B9EYotz0ULQBM0sPYA5O0L6vZbolKP9S3M6f+6KcHUCTQY9mDISVL Mc7OLxyF88Hzg== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 22/36] function_graph: Add a new entry handler with parent_ip 
and ftrace_regs Date: Fri, 12 Jan 2024 19:15:11 +0900 Message-Id: <170505451140.459169.10466274060076226583.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879470728744413 X-GMAIL-MSGID: 1787879470728744413 From: Masami Hiramatsu (Google) Add a new entry handler to fgraph_ops as 'entryregfunc' which takes parent_ip and ftrace_regs. Note that the 'entryfunc' and 'entryregfunc' are mutual exclusive. You can set only one of them. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Update for new multiple fgraph. --- arch/arm64/kernel/ftrace.c | 2 + arch/loongarch/kernel/ftrace_dyn.c | 6 ++++ arch/powerpc/kernel/trace/ftrace.c | 2 + arch/powerpc/kernel/trace/ftrace_64_pg.c | 10 ++++--- arch/x86/kernel/ftrace.c | 42 ++++++++++++++++-------------- include/linux/ftrace.h | 19 +++++++++++--- kernel/trace/fgraph.c | 30 +++++++++++++++++---- 7 files changed, 76 insertions(+), 35 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index b96740829798..779b975f03f5 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -497,7 +497,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, return; if (!function_graph_enter_ops(*parent, ip, frame_pointer, - (void *)frame_pointer, gops)) + (void *)frame_pointer, fregs, gops)) *parent = (unsigned long)&return_to_handler; ftrace_test_recursion_unlock(bit); diff --git a/arch/loongarch/kernel/ftrace_dyn.c b/arch/loongarch/kernel/ftrace_dyn.c index 73858c9029cc..39b3f09a5e0c 100644 --- a/arch/loongarch/kernel/ftrace_dyn.c +++ b/arch/loongarch/kernel/ftrace_dyn.c @@ -244,7 +244,11 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct pt_regs *regs = &fregs->regs; unsigned long *parent = (unsigned long *)®s->regs[1]; - prepare_ftrace_return(ip, (unsigned long *)parent); + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + if (!function_graph_enter_regs(regs->regs[1], ip, 0, parent, fregs)) + regs->regs[1] = (unsigned long)&return_to_handler; } #else static int ftrace_modify_graph_caller(bool enable) diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c index 82010629cf88..9bf1b6912116 100644 --- a/arch/powerpc/kernel/trace/ftrace.c +++ b/arch/powerpc/kernel/trace/ftrace.c @@ -422,7 +422,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, if (bit < 0) goto out; - if (!function_graph_enter(parent_ip, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent_ip, ip, 0, (unsigned long *)sp, fregs)) parent_ip = ppc_function_entry(return_to_handler); ftrace_test_recursion_unlock(bit); diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.c b/arch/powerpc/kernel/trace/ftrace_64_pg.c index 7b85c3b460a3..43f6cfaaf7db 100644 --- a/arch/powerpc/kernel/trace/ftrace_64_pg.c +++ b/arch/powerpc/kernel/trace/ftrace_64_pg.c @@ -795,7 +795,8 @@ int ftrace_disable_ftrace_graph_caller(void) * in current thread info. Return the address we want to divert to. 
*/ static unsigned long -__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) +__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp, + struct ftrace_regs *fregs) { unsigned long return_hooker; int bit; @@ -812,7 +813,7 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp return_hooker = ppc_function_entry(return_to_handler); - if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent, ip, 0, (unsigned long *)sp, fregs)) parent = return_hooker; ftrace_test_recursion_unlock(bit); @@ -824,13 +825,14 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); + fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, + fregs->regs.gpr[1], fregs); } #else unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) { - return __prepare_ftrace_return(parent, ip, sp); + return __prepare_ftrace_return(parent, ip, sp, NULL); } #endif #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 845e29b4254f..0f757e399a96 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -614,16 +614,8 @@ int ftrace_disable_ftrace_graph_caller(void) } #endif /* CONFIG_DYNAMIC_FTRACE && !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ -/* - * Hook the return address and push it in the stack of return addrs - * in current thread info. - */ -void prepare_ftrace_return(unsigned long ip, unsigned long *parent, - unsigned long frame_pointer) +static inline bool skip_ftrace_return(void) { - unsigned long return_hooker = (unsigned long)&return_to_handler; - int bit; - /* * When resuming from suspend-to-ram, this function can be indirectly * called from early CPU startup code while the CPU is in real mode, @@ -633,13 +625,28 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent, * This check isn't as accurate as virt_addr_valid(), but it should be * good enough for this purpose, and it's fast. */ - if (unlikely((long)__builtin_frame_address(0) >= 0)) - return; + if ((long)__builtin_frame_address(0) >= 0) + return true; - if (unlikely(ftrace_graph_is_dead())) - return; + if (ftrace_graph_is_dead()) + return true; + + if (atomic_read(¤t->tracing_graph_pause)) + return true; + return false; +} - if (unlikely(atomic_read(¤t->tracing_graph_pause))) +/* + * Hook the return address and push it in the stack of return addrs + * in current thread info. 
+ */ +void prepare_ftrace_return(unsigned long ip, unsigned long *parent, + unsigned long frame_pointer) +{ + unsigned long return_hooker = (unsigned long)&return_to_handler; + int bit; + + if (unlikely(skip_ftrace_return())) return; bit = ftrace_test_recursion_trylock(ip, *parent); @@ -661,17 +668,14 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); int bit; - if (unlikely(ftrace_graph_is_dead())) - return; - - if (unlikely(atomic_read(¤t->tracing_graph_pause))) + if (unlikely(skip_ftrace_return())) return; bit = ftrace_test_recursion_trylock(ip, *parent); if (bit < 0) return; - if (!function_graph_enter_ops(*parent, ip, 0, parent, gops)) + if (!function_graph_enter_ops(*parent, ip, 0, parent, fregs, gops)) *parent = (unsigned long)&return_to_handler; ftrace_test_recursion_unlock(bit); diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 815e865f46c9..65d4d4b68768 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1063,6 +1063,11 @@ typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, struct fgraph_ops *); /* entry */ +typedef int (*trace_func_graph_regs_ent_t)(unsigned long func, + unsigned long parent_ip, + struct ftrace_regs *fregs, + struct fgraph_ops *); /* entry w/ regs */ + extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); #ifdef CONFIG_FUNCTION_GRAPH_TRACER @@ -1070,6 +1075,7 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + trace_func_graph_regs_ent_t entryregfunc; struct ftrace_ops ops; /* for the hash lists */ void *private; int idx; @@ -1106,13 +1112,20 @@ struct ftrace_ret_stack { extern void return_to_handler(void); extern int -function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp); +function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs); + +static inline int function_graph_enter(unsigned long ret, unsigned long func, + unsigned long fp, unsigned long *retp) +{ + return function_graph_enter_regs(ret, func, fp, retp, NULL); +} extern int function_graph_enter_ops(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp, - struct fgraph_ops *gops); + struct ftrace_regs *fregs, struct fgraph_ops *gops); struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx); diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 6a9206ebc6a2..0cb02de2db70 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -621,9 +621,21 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, # define MCOUNT_INSN_SIZE 0 #endif +static inline int call_entry_func(struct ftrace_graph_ent *trace, + unsigned long func, unsigned long ret, + struct ftrace_regs *fregs, + struct fgraph_ops *gops) +{ + if (gops->entryregfunc) + return gops->entryregfunc(func, ret, fregs, gops); + + return gops->entryfunc(trace, gops); +} + /* If the caller does not use ftrace, call this function. 
*/ -int function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp) +int function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs) { struct ftrace_graph_ent trace; unsigned long bitmap = 0; @@ -658,7 +670,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, save_curr_ret_stack = current->curr_ret_stack; if (ftrace_ops_test(&gops->ops, func, NULL) && - gops->entryfunc(&trace, gops)) + call_entry_func(&trace, func, ret, fregs, gops)) bitmap |= BIT(i); else /* Clear out any saved storage */ @@ -686,6 +698,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, /* This is called from ftrace_graph_func() via ftrace */ int function_graph_enter_ops(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs, struct fgraph_ops *gops) { struct ftrace_graph_ent trace; @@ -710,7 +723,7 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, trace.func = func; trace.depth = current->curr_ret_depth; save_curr_ret_stack = current->curr_ret_stack; - if (gops->entryfunc(&trace, gops)) { + if (call_entry_func(&trace, func, ret, fregs, gops)) { if (type == FGRAPH_TYPE_RESERVED) set_fgraph_index_bitmap(current, index, BIT(gops->idx)); else @@ -993,7 +1006,8 @@ void fgraph_init_ops(struct ftrace_ops *dst_ops, struct ftrace_ops *src_ops) { dst_ops->func = ftrace_graph_func; - dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB; + dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB | + FTRACE_OPS_FL_SAVE_ARGS; #ifdef FTRACE_GRAPH_TRAMP_ADDR dst_ops->trampoline = FTRACE_GRAPH_TRAMP_ADDR; @@ -1239,10 +1253,14 @@ int register_ftrace_graph(struct fgraph_ops *gops) int ret = 0; int i; + if (gops->entryfunc && gops->entryregfunc) + return -EINVAL; + mutex_lock(&ftrace_lock); if (!gops->ops.func) { - gops->ops.flags |= FTRACE_OPS_GRAPH_STUB; + gops->ops.flags |= FTRACE_OPS_GRAPH_STUB | + FTRACE_OPS_FL_SAVE_ARGS; gops->ops.func = ftrace_graph_func; #ifdef FTRACE_GRAPH_TRAMP_ADDR gops->ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR; From patchwork Fri Jan 12 10:15:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187682 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78084dyc; Fri, 12 Jan 2024 02:19:08 -0800 (PST) X-Google-Smtp-Source: AGHT+IE/qfx1WiPTaY/usc1DVl1H4IWiqi6xIYcGxnvdhObLcGlnnkrPyECxd2lf3bFqZ/sh+41z X-Received: by 2002:a05:6214:242c:b0:681:21f5:7b4f with SMTP id gy12-20020a056214242c00b0068121f57b4fmr776208qvb.1.1705054748710; Fri, 12 Jan 2024 02:19:08 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054748; cv=none; d=google.com; s=arc-20160816; b=JujEvPQQFYjc1nk/vw0nHegc5/FPmWZRcz9gEtRdTwO8I19J/q/xIxrpZIv/0Rl5Ef fxje7D3ZrxL7i84d6O9eQPCtq5eLKPoSpqEHfohVwTSRjlEX3YJjA1ROFjSlipt7WXWM BJNKoiB+ssd1mFRD61TcSQ0Kl2gm4SDK8xAn7MDEcv7Iax8hKc7wgFq8x6m+cd5BHHDZ Dxmj/KbqBWPYjde0c0ohgWky1J/jdJuN4FysMKOJ377OnhXbot0Wwd4QX/A4y5OLoRe1 YN0WbxsJ/KYrNY936jX8qzj2AFTJSyeyKQxll+Xfv5T5BanAWwE1P1gnAlYMbOrzYGOr QDvw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; 
bh=9qDh+6BUHQUPRlEskba6pr/qCzP7Sl1hqzIsHUB2nSE=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=yNsr0AWF/h2ZZCxiYs4OE2J5cdK5LWhrBaQJamhsSr0Pct6RXVd21sP7oL2i7t1oRz uzMT/99iERv4+SAm79IOg5s+SWApXQORonrzbzoo5pMxyOGa+SL4Gb48jEqGXACq3NTi IRfDgfyPql8vOAJHjoz3OrAZ3GIsYNw4R4hrHPwyJ32loURIE5m5gofZ92w96D8DL9CY 30k91+UGjhhPFk7n/p6+uxsWKs7D8fVBilBQXb+08PMKWsW1zDXvM7zzkJZxq3wZseqD WxEyj2QfItVpjD3jQIkXocoEyJdg2+/+cGXbOS9s4sTDH9bX3xoLT6AtjZ9Gf1q028tj gawA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=pzBmrzVU; spf=pass (google.com: domain of linux-kernel+bounces-24571-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24571-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. [147.75.199.223]) by mx.google.com with ESMTPS id n18-20020a0cdc92000000b0068087e996c1si2479287qvk.455.2024.01.12.02.19.08 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:19:08 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24571-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=pzBmrzVU; spf=pass (google.com: domain of linux-kernel+bounces-24571-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24571-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 72B861C2508B for ; Fri, 12 Jan 2024 10:19:08 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 3C70863503; Fri, 12 Jan 2024 10:15:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pzBmrzVU" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DAAA6634FC; Fri, 12 Jan 2024 10:15:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 76877C433C7; Fri, 12 Jan 2024 10:15:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054528; bh=cOVoWSf5NdFJF3ZMipfWZ3TW+oD2PH8jSGs6Ggfl9Gc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pzBmrzVUp4LFlbzejOgP1chryIOriYxpKcyZgkzpkxnswjrPU6J3g+ri6qT6bs6zl FIj0/F7ISHWhWqJ7B5D5awaIOhUG/dgiztK45nPShc07DDf7iB7NgYLaOwsIkhwoT3 STWG7c+8Ez3jxtNG8BouBre1PBuai2r2/d6GsCMtxzCfy/eUgtb2UR1SRtdhin0pXQ QwDgBfBJxB79Ja/+MmI/wA5iwYU8x8cMNgL+YAOnbI17RB9V0BK4X3XXcB5TE1klyl N6ZQ2EdymRV4D9RAAGzO8vj286u5BAmr7E8H3d2R6jt2upgQ7G7SMGkD64Bdzo1Hp7 c5vqJJZswum1g== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark 
Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 23/36] function_graph: Add a new exit handler with parent_ip and ftrace_regs Date: Fri, 12 Jan 2024 19:15:23 +0900 Message-Id: <170505452307.459169.10891050900074648719.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879488516851901 X-GMAIL-MSGID: 1787879488516851901 From: Masami Hiramatsu (Google) Add a new return handler to fgraph_ops as 'retregfunc' which takes parent_ip and ftrace_regs instead of ftrace_graph_ret. This handler is available only if the arch support CONFIG_HAVE_FUNCTION_GRAPH_FREGS. Note that the 'retfunc' and 'reregfunc' are mutual exclusive. You can set only one of them. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v6: - update to use ftrace_regs_get_return_value() because of reordering patches. Changes in v3: - Update for new multiple fgraph. - Save the return address to instruction pointer in ftrace_regs. --- arch/x86/include/asm/ftrace.h | 2 + include/linux/ftrace.h | 10 +++++- kernel/trace/Kconfig | 5 ++- kernel/trace/fgraph.c | 70 ++++++++++++++++++++++++++++------------- 4 files changed, 63 insertions(+), 24 deletions(-) diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index c88bf47f46da..a061f8832b20 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -72,6 +72,8 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) override_function_with_return(&(fregs)->regs) #define ftrace_regs_query_register_offset(name) \ regs_query_register_offset(name) +#define ftrace_regs_get_frame_pointer(fregs) \ + frame_pointer(&(fregs)->regs) struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 65d4d4b68768..da2a23f5a9ed 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -43,7 +43,9 @@ struct dyn_ftrace; char *arch_ftrace_match_adjust(char *str, const char *search); -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs); +#elif defined(CONFIG_HAVE_FUNCTION_GRAPH_RETVAL) struct fgraph_ret_regs; unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs); #else @@ -157,6 +159,7 @@ struct ftrace_regs { #define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0) #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ + static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs) { if (!fregs) @@ -1067,6 +1070,10 @@ typedef int (*trace_func_graph_regs_ent_t)(unsigned long func, unsigned long parent_ip, struct ftrace_regs *fregs, struct fgraph_ops *); /* entry w/ regs */ +typedef void (*trace_func_graph_regs_ret_t)(unsigned long func, + unsigned long parent_ip, + struct ftrace_regs *, + struct fgraph_ops *); /* return w/ regs */ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); @@ -1076,6 +1083,7 @@ struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; trace_func_graph_regs_ent_t entryregfunc; + trace_func_graph_regs_ret_t retregfunc; struct ftrace_ops ops; /* for the hash lists */ void *private; int idx; diff --git 
a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 61c541c36596..308b3bec01b1 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -34,6 +34,9 @@ config HAVE_FUNCTION_GRAPH_TRACER config HAVE_FUNCTION_GRAPH_RETVAL bool +config HAVE_FUNCTION_GRAPH_FREGS + bool + config HAVE_DYNAMIC_FTRACE bool help @@ -232,7 +235,7 @@ config FUNCTION_GRAPH_TRACER config FUNCTION_GRAPH_RETVAL bool "Kernel Function Graph Return Value" - depends on HAVE_FUNCTION_GRAPH_RETVAL + depends on HAVE_FUNCTION_GRAPH_RETVAL || HAVE_FUNCTION_GRAPH_FREGS depends on FUNCTION_GRAPH_TRACER default n help diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 0cb02de2db70..0f11f80bdd6c 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -742,8 +742,8 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, /* Retrieve a function return address to the trace stack on thread info.*/ static struct ftrace_ret_stack * -ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, - unsigned long frame_pointer, int *index) +ftrace_pop_return_trace(unsigned long *ret, unsigned long frame_pointer, + int *index) { struct ftrace_ret_stack *ret_stack; @@ -788,10 +788,6 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, *index += FGRAPH_RET_INDEX; *ret = ret_stack->ret; - trace->func = ret_stack->func; - trace->calltime = ret_stack->calltime; - trace->overrun = atomic_read(¤t->trace_overrun); - trace->depth = current->curr_ret_depth; /* * We still want to trace interrupts coming in if * max_depth is set to 1. Make sure the decrement is @@ -830,21 +826,42 @@ static struct notifier_block ftrace_suspend_notifier = { /* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */ struct fgraph_ret_regs; +static void fgraph_call_retfunc(struct ftrace_regs *fregs, + struct fgraph_ret_regs *ret_regs, + struct ftrace_ret_stack *ret_stack, + struct fgraph_ops *gops) +{ + struct ftrace_graph_ret trace; + + trace.func = ret_stack->func; + trace.calltime = ret_stack->calltime; + trace.overrun = atomic_read(¤t->trace_overrun); + trace.depth = current->curr_ret_depth; + trace.rettime = trace_clock_local(); +#ifdef CONFIG_FUNCTION_GRAPH_RETVAL + if (fregs) + trace.retval = ftrace_regs_get_return_value(fregs); + else + trace.retval = fgraph_ret_regs_return_value(ret_regs); +#endif + gops->retfunc(&trace, gops); +} + /* * Send the trace to the ring-buffer. * @return the original return address. 
*/ -static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs, +static unsigned long __ftrace_return_to_handler(struct ftrace_regs *fregs, + struct fgraph_ret_regs *ret_regs, unsigned long frame_pointer) { struct ftrace_ret_stack *ret_stack; - struct ftrace_graph_ret trace; unsigned long bitmap; unsigned long ret; int index; int i; - ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &index); + ret_stack = ftrace_pop_return_trace(&ret, frame_pointer, &index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -853,10 +870,8 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs return (unsigned long)panic; } - trace.rettime = trace_clock_local(); -#ifdef CONFIG_FUNCTION_GRAPH_RETVAL - trace.retval = fgraph_ret_regs_return_value(ret_regs); -#endif + if (fregs) + ftrace_regs_set_instruction_pointer(fregs, ret); bitmap = get_fgraph_index_bitmap(current, index); for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { @@ -867,7 +882,10 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs if (gops == &fgraph_stub) continue; - gops->retfunc(&trace, gops); + if (gops->retregfunc) + gops->retregfunc(ret_stack->func, ret, fregs, gops); + else + fgraph_call_retfunc(fregs, ret_regs, ret_stack, gops); } /* @@ -883,20 +901,22 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs return ret; } -/* - * After all architecures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can - * leave only ftrace_return_to_handler(ret_regs). - */ -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs) +{ + return __ftrace_return_to_handler(fregs, NULL, + ftrace_regs_get_frame_pointer(fregs)); +} +#elif defined(CONFIG_HAVE_FUNCTION_GRAPH_RETVAL) unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs) { - return __ftrace_return_to_handler(ret_regs, + return __ftrace_return_to_handler(NULL, ret_regs, fgraph_ret_regs_frame_pointer(ret_regs)); } #else unsigned long ftrace_return_to_handler(unsigned long frame_pointer) { - return __ftrace_return_to_handler(NULL, frame_pointer); + return __ftrace_return_to_handler(NULL, NULL, frame_pointer); } #endif @@ -1253,9 +1273,15 @@ int register_ftrace_graph(struct fgraph_ops *gops) int ret = 0; int i; - if (gops->entryfunc && gops->entryregfunc) + if ((gops->entryfunc && gops->entryregfunc) || + (gops->retfunc && gops->retregfunc)) return -EINVAL; +#ifndef CONFIG_HAVE_FUNCTION_GRAPH_FREGS + if (gops->retregfunc) + return -EOPNOTSUPP; +#endif + mutex_lock(&ftrace_lock); if (!gops->ops.func) { From patchwork Fri Jan 12 10:15:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187684 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78195dyc; Fri, 12 Jan 2024 02:19:26 -0800 (PST) X-Google-Smtp-Source: AGHT+IE29vtxiib6Y1rwzYdYjl9d6HpV5ReCcqTP6AFP249JDGtM1ooZ7eUpZLAdhPHyDfPfYIjI X-Received: by 2002:a17:907:7eaa:b0:a2c:96a1:99ed with SMTP id qb42-20020a1709077eaa00b00a2c96a199edmr1436506ejc.11.1705054766426; Fri, 12 Jan 2024 02:19:26 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054766; cv=none; d=google.com; s=arc-20160816; b=n4DNBYeyyJGX4PLRr5hvPMzSkiMstTW4gbCysmqKoSZOVBWBfamXgt7AY+SLgWbZnb LrMmAicbuBJG5mYxnvkvVWQ2ZHHMT+nihElrAL5Oe/ENLd3Y2A0ud0b6kAXSsWQE0cTH 
e6BxgQoO2bfbJrTE5uImzIRxDmdkM5ZaK0ajJu/rq9CcjQkDQ2L5SQB511/GHkt0Si E4nIh4BlwBmDbDtmxVSS6JrfEK94XmjB9GBRhCHY2mmBzr6BiXr+gd6Lx17qSRk+Gi dKl5OlUDcSWgn26aYD37e8QQtpAuOzCmBlVUwbAJFMF3z27mF27wAhJpk5vheQhpYb YGajNiqKfDMYrwZCKYJUFxKUYuJ78R4HeVR7WrCqV4dPigf2TZD9/7+ixtj0PxvUVt pnuiVjUjqgLkw== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 24/36] x86/ftrace: Enable HAVE_FUNCTION_GRAPH_FREGS Date: Fri, 12 Jan 2024 19:15:34 +0900 Message-Id: <170505453413.459169.7198256674189199722.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879506492396031 X-GMAIL-MSGID: 1787879506492396031 From: Masami Hiramatsu (Google) Support HAVE_FUNCTION_GRAPH_FREGS on x86-64, which saves ftrace_regs on the stack in ftrace_graph return trampoline so that the callbacks can access registers via ftrace_regs APIs. Note that this only recovers 'rax' and 'rdx' registers because other registers are not used anymore and recovered by caller. 'rax' and 'rdx' will be used for passing the return value. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Add a comment about rip. Changes in v2: - Save rsp register and drop clearing orig_ax. --- arch/x86/Kconfig | 3 ++- arch/x86/kernel/ftrace_64.S | 37 +++++++++++++++++++++++++++++-------- 2 files changed, 31 insertions(+), 9 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 1566748f16c4..375ad280ee75 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -219,7 +219,8 @@ config X86 select HAVE_FAST_GUP select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD - select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_FREGS if HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_FUNCTION_GRAPH_RETVAL if !HAVE_DYNAMIC_FTRACE_WITH_ARGS select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE) select HAVE_FUNCTION_TRACER select HAVE_GCC_PLUGINS diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index 214f30e9f0c0..8a16f774604e 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -348,21 +348,42 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__) SYM_CODE_START(return_to_handler) UNWIND_HINT_UNDEFINED ANNOTATE_NOENDBR - subq $24, %rsp + /* + * Save the registers requires for ftrace_regs; + * rax, rcx, rdx, rdi, rsi, r8, r9 and rbp + */ + subq $(FRAME_SIZE), %rsp + movq %rax, RAX(%rsp) + movq %rcx, RCX(%rsp) + movq %rdx, RDX(%rsp) + movq %rsi, RSI(%rsp) + movq %rdi, RDI(%rsp) + movq %r8, R8(%rsp) + movq %r9, R9(%rsp) + movq %rbp, RBP(%rsp) + /* + * orig_ax is not cleared because it is used for indicating the direct + * trampoline in the fentry. And rip is not set because we don't know + * the correct return address here. 
+ */ + + leaq FRAME_SIZE(%rsp), %rcx + movq %rcx, RSP(%rsp) - /* Save the return values */ - movq %rax, (%rsp) - movq %rdx, 8(%rsp) - movq %rbp, 16(%rsp) movq %rsp, %rdi call ftrace_return_to_handler movq %rax, %rdi - movq 8(%rsp), %rdx - movq (%rsp), %rax - addq $24, %rsp + /* + * Restore only rax and rdx because other registers are not used + * for return value nor callee saved. Caller will reuse/recover it. + */ + movq RDX(%rsp), %rdx + movq RAX(%rsp), %rax + + addq $(FRAME_SIZE), %rsp /* * Jump back to the old return address. This cannot be JMP_NOSPEC rdi * since IBT would demand that contain ENDBR, which simply isn't so for From patchwork Fri Jan 12 10:15:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187695
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 25/36] arm64: ftrace: Enable HAVE_FUNCTION_GRAPH_FREGS Date: Fri, 12 Jan 2024 19:15:45 +0900 Message-Id: <170505454553.459169.17513084336482780988.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Enable CONFIG_HAVE_FUNCTION_GRAPH_FREGS on arm64. Note that this depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS which is enabled if the compiler supports "-fpatchable-function-entry=2".
If not, it continue to use ftrace_ret_regs. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Newly added. --- arch/arm64/Kconfig | 2 ++ arch/arm64/include/asm/ftrace.h | 6 ++++++ arch/arm64/kernel/entry-ftrace.S | 28 ++++++++++++++++++++++++++++ 3 files changed, 36 insertions(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 7b071a00425d..beebc724dcae 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -192,6 +192,8 @@ config ARM64 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS \ if $(cc-option,-fpatchable-function-entry=2) + select HAVE_FUNCTION_GRAPH_FREGS \ + if HAVE_DYNAMIC_FTRACE_WITH_ARGS select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS \ if DYNAMIC_FTRACE_WITH_ARGS && DYNAMIC_FTRACE_WITH_CALL_OPS select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \ diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index ab158196480c..efd5dbf74dd6 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -131,6 +131,12 @@ ftrace_regs_set_return_value(struct ftrace_regs *fregs, fregs->regs[0] = ret; } +static __always_inline unsigned long +ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs) +{ + return fregs->fp; +} + static __always_inline void ftrace_override_function_with_return(struct ftrace_regs *fregs) { diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index f0c16640ef21..d87ccdb9e678 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -328,6 +328,33 @@ SYM_FUNC_END(ftrace_stub_graph) * Run ftrace_return_to_handler() before going back to parent. * @fp is checked against the value passed by ftrace_graph_caller(). */ +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +SYM_CODE_START(return_to_handler) + /* save ftrace_regs except for PC */ + sub sp, sp, #FREGS_SIZE + stp x0, x1, [sp, #FREGS_X0] + stp x2, x3, [sp, #FREGS_X2] + stp x4, x5, [sp, #FREGS_X4] + stp x6, x7, [sp, #FREGS_X6] + str x8, [sp, #FREGS_X8] + str x29, [sp, #FREGS_FP] + str x9, [sp, #FREGS_LR] + str x10, [sp, #FREGS_SP] + + mov x0, sp + bl ftrace_return_to_handler // addr = ftrace_return_to_hander(fregs); + mov x30, x0 // restore the original return address + + /* restore return value regs */ + ldp x0, x1, [sp, #FREGS_X0] + ldp x2, x3, [sp, #FREGS_X2] + ldp x4, x5, [sp, #FREGS_X4] + ldp x6, x7, [sp, #FREGS_X6] + add sp, sp, #FREGS_SIZE + + ret +SYM_CODE_END(return_to_handler) +#else /* !CONFIG_HAVE_FUNCTION_GRAPH_FREGS */ SYM_CODE_START(return_to_handler) /* save return value regs */ sub sp, sp, #FGRET_REGS_SIZE @@ -350,4 +377,5 @@ SYM_CODE_START(return_to_handler) ret SYM_CODE_END(return_to_handler) +#endif /* CONFIG_HAVE_FUNCTION_GRAPH_FREGS */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ From patchwork Fri Jan 12 10:15:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187688 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78948dyc; Fri, 12 Jan 2024 02:21:12 -0800 (PST) X-Google-Smtp-Source: AGHT+IGDB6rmec9hFQEX/MmrGV6XoBIrKcWG3NkUrkT8rAAqfjMwRDnsSxzPD9nYScN42zyDjV1u X-Received: by 2002:a05:6358:7304:b0:175:9b4d:5d81 with SMTP id d4-20020a056358730400b001759b4d5d81mr1932937rwg.59.1705054872684; Fri, 12 Jan 2024 02:21:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054872; cv=none; d=google.com; s=arc-20160816; b=DzW8gwFtbIgA+l3Finfl7N+GdmlvitwotB52mK87lPzBgbAnfpZ1L5AhL/MsEdiIRU 
e/Olj5JP5JtIiGlsBqSdB8J86Z5V5ZDPjPnCwoN0BcfOIDoAV1NLivSbUguR4/pwzJ n2unHe6df5WymSZ5Ck/3sdoh068MxKHSgBo+2SHczF1pLUUvFZ1PD6cjLU5BA9Ldbu 8oYPJkkSgzbAFlVdXqvfTjhB0cnJJHRxzuRO8noHjcGvb6L6/vLEgTbIzWnXVVdF2T PZmurGSkLKsIP6mL6c5FFJ825sfen1v3JcJh0RvXEtJpo17fIkYzxcn6Bqp/WDT31Y oEDWniDDRWpLg== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 26/36] fprobe: Use ftrace_regs in fprobe entry handler Date: Fri, 12 Jan 2024 19:15:57 +0900 Message-Id: <170505455690.459169.11934459697611471730.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879618559573660 X-GMAIL-MSGID: 1787879618559573660 From: Masami Hiramatsu (Google) This allows fprobes to be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, then we can enable fprobe on arm64. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes in v6: - Keep using SAVE_REGS flag to avoid breaking bpf kprobe-multi test. --- include/linux/fprobe.h | 2 +- kernel/trace/Kconfig | 3 ++- kernel/trace/bpf_trace.c | 10 +++++++--- kernel/trace/fprobe.c | 3 ++- kernel/trace/trace_fprobe.c | 6 +++++- lib/test_fprobe.c | 4 ++-- samples/fprobe/fprobe_example.c | 2 +- 7 files changed, 20 insertions(+), 10 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 3e03758151f4..36c0595f7b93 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -35,7 +35,7 @@ struct fprobe { int nr_maxactive; int (*entry_handler)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 308b3bec01b1..805d72ab77c6 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -290,7 +290,7 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS + depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n @@ -675,6 +675,7 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS + depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 84e8a0f6e4e0..d3f8745d8ead 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2503,7 +2503,7 @@ static int __init bpf_event_init(void) fs_initcall(bpf_event_init); #endif /* CONFIG_MODULES */ -#ifdef CONFIG_FPROBE +#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2733,10 +2733,14 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, static int 
kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { struct bpf_kprobe_multi_link *link; + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return 0; link = container_of(fp, struct bpf_kprobe_multi_link, fp); kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); @@ -3008,7 +3012,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE */ +#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 6cd2a4e3afb8..a932c3a79e8f 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -46,7 +46,7 @@ static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, } if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, ftrace_get_regs(fregs), entry_data); + ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); /* If entry_handler returns !0, nmissed is not counted. */ if (rh) { @@ -182,6 +182,7 @@ static void fprobe_init(struct fprobe *fp) fp->ops.func = fprobe_kprobe_handler; else fp->ops.func = fprobe_handler; + fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS; } diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 7d2ddbcfa377..ef6b36fd05ae 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -320,12 +320,16 @@ NOKPROBE_SYMBOL(fexit_perf_func); #endif /* CONFIG_PERF_EVENTS */ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; + if (!regs) + return 0; + if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fentry_trace_func(tf, entry_ip, regs); #ifdef CONFIG_PERF_EVENTS diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 24de0e5ff859..ff607babba18 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -40,7 +40,7 @@ static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); /* This can be called on the fprobe_selftest_target and the fprobe_selftest_target2 */ @@ -81,7 +81,7 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); return 0; diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index 64e715e7ed11..1545a1aac616 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -50,7 +50,7 @@ static void show_backtrace(void) static int sample_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { if (use_trace) /* From patchwork Fri Jan 12 10:16:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 
1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187685
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 27/36] fprobe: Use ftrace_regs in fprobe exit handler Date: Fri, 12 Jan 2024 19:16:08 +0900 Message-Id: <170505456827.459169.5071371071652498082.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Change the fprobe exit handler to use ftrace_regs structure instead of pt_regs.
This also introduces HAVE_PT_REGS_TO_FTRACE_REGS_CAST, which means the memory layout of ftrace_regs is identical to that of pt_regs, so one structure can be cast to the other. The fprobe Kconfig entry gains a new dependency accordingly. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Use ftrace_regs_get_return_value() Changes from previous series: NOTHING, just forward ported. --- arch/loongarch/Kconfig | 1 + arch/s390/Kconfig | 1 + arch/x86/Kconfig | 1 + include/linux/fprobe.h | 2 +- include/linux/ftrace.h | 5 +++++ kernel/trace/Kconfig | 8 ++++++++ kernel/trace/bpf_trace.c | 6 +++++- kernel/trace/fprobe.c | 3 ++- kernel/trace/trace_fprobe.c | 6 +++++- lib/test_fprobe.c | 6 +++--- samples/fprobe/fprobe_example.c | 2 +- 11 files changed, 33 insertions(+), 8 deletions(-) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index ee123820a476..b0bd252aefe8 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -108,6 +108,7 @@ config LOONGARCH select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index d5d8f99d1f25..5736c1970f49 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -168,6 +168,7 @@ config S390 select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT if HAVE_MARCH_Z196_FEATURES diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 375ad280ee75..1a8b5d447e85 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -209,6 +209,7 @@ config X86 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64 + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST if X86_64 select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_SAMPLE_FTRACE_DIRECT if X86_64 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64 diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 36c0595f7b93..879a30956009 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -38,7 +38,7 @@ struct fprobe { unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); }; diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index da2a23f5a9ed..a72a2eaec576 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -159,6 +159,11 @@ struct ftrace_regs { #define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0) #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ +#ifdef CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST + +static_assert(sizeof(struct pt_regs) == sizeof(struct ftrace_regs)); + +#endif /* CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs) { diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 805d72ab77c6..1a2544712690 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -60,6 +60,13 @@ config HAVE_DYNAMIC_FTRACE_WITH_ARGS This allows for use of ftrace_regs_get_argument() and ftrace_regs_get_stack_pointer().
+config HAVE_PT_REGS_TO_FTRACE_REGS_CAST + bool + help + If this is set, the memory layout of the ftrace_regs data structure + is the same as the pt_regs. So the pt_regs is possible to be casted + to ftrace_regs. + config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE bool help @@ -291,6 +298,7 @@ config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS + depends on HAVE_PT_REGS_TO_FTRACE_REGS_CAST || !HAVE_DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index d3f8745d8ead..efb792f8f2ea 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2749,10 +2749,14 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, static void kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { struct bpf_kprobe_multi_link *link; + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index a932c3a79e8f..31210423efc3 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -124,6 +124,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, { struct fprobe *fp = (struct fprobe *)data; struct fprobe_rethook_node *fpr; + struct ftrace_regs *fregs = (struct ftrace_regs *)regs; int bit; if (!fp || fprobe_disabled(fp)) @@ -141,7 +142,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, return; } - fp->exit_handler(fp, fpr->entry_ip, ret_ip, regs, + fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, fp->entry_data_size ? 
(void *)fpr->data : NULL); ftrace_test_recursion_unlock(bit); } diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index ef6b36fd05ae..3982626c82e6 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -341,10 +341,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, NOKPROBE_SYMBOL(fentry_dispatcher); static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fexit_trace_func(tf, entry_ip, ret_ip, regs); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index ff607babba18..271ce0caeec0 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -59,9 +59,9 @@ static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { - unsigned long ret = regs_return_value(regs); + unsigned long ret = ftrace_regs_get_return_value(fregs); KUNIT_EXPECT_FALSE(current_test, preemptible()); if (ip != target_ip) { @@ -89,7 +89,7 @@ static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index 1545a1aac616..d476d1f07538 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -67,7 +67,7 @@ static int sample_entry_handler(struct fprobe *fp, unsigned long ip, } static void sample_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *data) { unsigned long rip = ret_ip; From patchwork Fri Jan 12 10:16:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187686 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78695dyc; Fri, 12 Jan 2024 02:20:35 -0800 (PST) X-Google-Smtp-Source: AGHT+IFqb8ljosQIiCBbRmLXlFuARh1HMn2M1ng5ZavEwx8laNA01+aIPqY9UCW9o/v+TaegPWZR X-Received: by 2002:a2e:8090:0:b0:2cd:34e2:b7e9 with SMTP id i16-20020a2e8090000000b002cd34e2b7e9mr288541ljg.52.1705054835552; Fri, 12 Jan 2024 02:20:35 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054835; cv=none; d=google.com; s=arc-20160816; b=CjkKaBYTE/W3CLSaBekZjKqjm+0lCGOGZ+Uwe6bydHTGeD4teqTN/unc5bOaGEe3eJ 7/8aVG5l74EmNI6CDYxzGXC85oYRTIiwyci+UJd7JDxV/L/zTDj7hV9daefK3hNObnbe WN8ajvaKNLdJz4dEZMqN+zsNf0HpQ4Z5gDnKvkMNLil0XAenko76thlK+MDPrRxbc31C TdJVHCFmgAeOMa78jteXxR2c4EtJ6GUtIziq3ORP+xRLqEfeZ+8mpn4/U0m1fFzFq+W/ WePVcJvPRMbhqZg8G9bQ1Ng/vMFjyzeuLlRdhWRFziCZ/3JMcfd/6wpPcsyDI3TW6oU2 wp+Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to 
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei
Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 28/36] tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs Date: Fri, 12 Jan 2024 19:16:19 +0900 Message-Id: <170505457974.459169.12611245988226659386.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879579272774743 X-GMAIL-MSGID: 1787879579272774743 From: Masami Hiramatsu (Google) Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs. If the architecture defines its own ftrace_regs, this copies partial registers to pt_regs and returns it. If not, ftrace_regs is the same as pt_regs and ftrace_partial_regs() will return ftrace_regs::regs. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes from previous series: NOTHING, just forward ported. --- arch/arm64/include/asm/ftrace.h | 11 +++++++++++ include/linux/ftrace.h | 17 +++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index efd5dbf74dd6..31051fa2b4d9 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -143,6 +143,17 @@ ftrace_override_function_with_return(struct ftrace_regs *fregs) fregs->pc = fregs->lr; } +static __always_inline struct pt_regs * +ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) +{ + memcpy(regs->regs, fregs->regs, sizeof(u64) * 9); + regs->sp = fregs->sp; + regs->pc = fregs->pc; + regs->regs[29] = fregs->fp; + regs->regs[30] = fregs->lr; + return regs; +} + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index a72a2eaec576..515ec804d605 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -173,6 +173,23 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs return arch_ftrace_get_regs(fregs); } +#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \ + defined(CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST) + +static __always_inline struct pt_regs * +ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + /* + * If CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST=y, ftrace_regs memory + * layout is the same as pt_regs. So always returns that address. + * Since arch_ftrace_get_regs() will check some members and may return + * NULL, we can not use it. + */ + return &fregs->regs; +} + +#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs. 
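For illustration only (not part of this patch), a minimal sketch of how a callback could use ftrace_partial_regs(); the handler name and the printed message are hypothetical, and it assumes the ftrace_regs-based fprobe entry_handler prototype introduced earlier in this series together with an on-stack pt_regs buffer for the partial copy:

/* Hypothetical fprobe entry handler built on ftrace_partial_regs(). */
#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>
#include <linux/ptrace.h>

static int example_entry_handler(struct fprobe *fp, unsigned long ip,
				 unsigned long ret_ip,
				 struct ftrace_regs *fregs, void *data)
{
	struct pt_regs copy;	/* scratch pt_regs for the partial copy */
	struct pt_regs *regs;

	/*
	 * On architectures with their own ftrace_regs layout this copies
	 * the saved registers into 'copy'; where ftrace_regs just wraps
	 * pt_regs it returns the embedded pt_regs directly.
	 */
	regs = ftrace_partial_regs(fregs, &copy);

	pr_info("hit %pS, pc=%lx\n", (void *)ip, instruction_pointer(regs));
	return 0;
}

Unlike ftrace_get_regs(), ftrace_partial_regs() always returns a usable pt_regs even when only a partial register set was saved, so a handler like the sketch above would keep working on arm64; registering it would presumably go through register_fprobe() as usual, with only the regs type in the callback changing.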
From patchwork Fri Jan 12 10:16:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187699
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 29/36] tracing: Add ftrace_fill_perf_regs() for perf event Date: Fri, 12 Jan 2024 19:16:31 +0900 Message-Id: <170505459120.459169.12920628413665440562.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Add ftrace_fill_perf_regs() which should be compatible with the perf_fetch_caller_regs().
In other words, the pt_regs returned from the ftrace_fill_perf_regs() must satisfy 'user_mode(regs) == false' and can be used for stack tracing. Signed-off-by: Masami Hiramatsu (Google) --- Changes from previous series: NOTHING, just forward ported. --- arch/arm64/include/asm/ftrace.h | 7 +++++++ arch/powerpc/include/asm/ftrace.h | 7 +++++++ arch/s390/include/asm/ftrace.h | 5 +++++ arch/x86/include/asm/ftrace.h | 7 +++++++ include/linux/ftrace.h | 31 +++++++++++++++++++++++++++++++ 5 files changed, 57 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 31051fa2b4d9..c1921bdf760b 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -154,6 +154,13 @@ ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) return regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->pc = (fregs)->pc; \ + (_regs)->regs[29] = (fregs)->fp; \ + (_regs)->sp = (fregs)->sp; \ + (_regs)->pstate = PSR_MODE_EL1h; \ + } while (0) + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index 7e138e0e3baf..8737a794c764 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -52,6 +52,13 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return fregs->regs.msr ? &fregs->regs : NULL; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->result = 0; \ + (_regs)->nip = (fregs)->regs.nip; \ + (_regs)->gpr[1] = (fregs)->regs.gpr[1]; \ + asm volatile("mfmsr %0" : "=r" ((_regs)->msr)); \ + } while (0) + static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 01e775c98425..c2a269c1617c 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -97,6 +97,11 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, #define ftrace_regs_query_register_offset(name) \ regs_query_register_offset(name) +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->psw.addr = (fregs)->regs.psw.addr; \ + (_regs)->gprs[15] = (fregs)->regs.gprs[15]; \ + } while (0) + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS /* * When an ftrace registered caller is tracing a function that is diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index a061f8832b20..2e3de45e9746 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -54,6 +54,13 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) return &fregs->regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->ip = (fregs)->regs.ip; \ + (_regs)->sp = (fregs)->regs.sp; \ + (_regs)->cs = __KERNEL_CS; \ + (_regs)->flags = 0; \ + } while (0) + #define ftrace_regs_set_instruction_pointer(fregs, _ip) \ do { (fregs)->regs.ip = (_ip); } while (0) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 515ec804d605..8150edcf8496 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -190,6 +190,37 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ +#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS + +/* + * Please define arch dependent pt_regs which compatible to the + * 
perf_arch_fetch_caller_regs() but based on ftrace_regs. + * This requires + * - user_mode(_regs) returns false (always kernel mode). + * - able to use the _regs for stack trace. + */ +#ifndef arch_ftrace_fill_perf_regs +/* As same as perf_arch_fetch_caller_regs(), do nothing by default */ +#define arch_ftrace_fill_perf_regs(fregs, _regs) do {} while (0) +#endif + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + arch_ftrace_fill_perf_regs(fregs, regs); + return regs; +} + +#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + return &fregs->regs; +} + +#endif + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs. From patchwork Fri Jan 12 10:16:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187687 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp78934dyc; Fri, 12 Jan 2024 02:21:11 -0800 (PST) X-Google-Smtp-Source: AGHT+IGwXvL/WUZsOcLq6eXN6RrAIHt44y2vxEe4FADZiIdB4Ym2ie110HkGRTK2nao5hCPEK6Ir X-Received: by 2002:a05:622a:150:b0:429:c8e4:9f32 with SMTP id v16-20020a05622a015000b00429c8e49f32mr1194621qtw.88.1705054871703; Fri, 12 Jan 2024 02:21:11 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054871; cv=none; d=google.com; s=arc-20160816; b=05oV/t4DlX2xtPVngfCfGrWcYaiWcRVQeQHb7D7r1t9DJ9NeFW6Y0HM7OHA/265pHu YPd4szZrWRFiCAXRQ+NzmwFWFfdCfDGOfrmOsIT9hMVLwEcKXwAAal2lXhvkd3OhPkQN S2k+U/0xqTdYeJrLrB3CbRiFs97jdbhGVeKRIu/AVIqrW8/MoZm8hgZ4lHy3WFC5IaCy JPfkaYRxp3brfSc07Uw/HXLSVe13yjjSfvpuBIIlJd5F/rfIM9uLuSvbz2zo62MPL6Cl Ht6GNAjezBj+27XgVIt7UmnlniWejxIhjZqIp+f+saenxs+SA3oqfA0d/r9Q0bSFuUwH 9hjA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=u1pK4TVBFLCoIZuRdbut2mSBBsi8+UK2X7ay+IVeSUY=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=heRI3E5bsqLFmWSuYS7jSiw2WaKxAUU0v/tyFZwUnYwGnrD9zLK/pZNSzzHq0PrZs5 ZEI4VMbCqQYLwDR2nvDhyKBh8Pn3UuU/Yr40TrurASacDNpoTY5iUCXTHejJZlGBVJ3U nVc408zqCLqTdDj5nH4Vz4LQP9nTufFBZeowGG3IG4yFt1W+8f1PptJgbAdJ6GS28VtQ jiTWGUm0FJdBXqWofxSAnQWQ04XhYTTwCDv+QURCmnWP43qCj5JGygT+dg4wc8tTi+qb zbg6Hhm2mPeeJ2kAP+UU72rBa/EZwILmfEKKg2Prg1QwtGivtfMHoUD7oWt9vYkq7Sa7 +Fcg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=K+FxjhYS; spf=pass (google.com: domain of linux-kernel+bounces-24578-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24578-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) by mx.google.com with ESMTPS id c1-20020ac87d81000000b00429ccb041f3si994745qtd.45.2024.01.12.02.21.11 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:21:11 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24578-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=K+FxjhYS; spf=pass (google.com: domain of linux-kernel+bounces-24578-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24578-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 713DB1C25429 for ; Fri, 12 Jan 2024 10:21:11 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 33D1664CF1; Fri, 12 Jan 2024 10:16:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="K+FxjhYS" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5DC755EE8B; Fri, 12 Jan 2024 10:16:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 492C1C43390; Fri, 12 Jan 2024 10:16:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054608; bh=o+I60+HLnGs6KfeJTnOl8ndkevTEMUwwEElHTkUskfo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=K+FxjhYSuuirASQ7f3YilkVZzGmUDsmXiyUtY16hS+iPxLCGxupbC2Wio+UBKViwr 3COkOh3C4lsfmDUBrL4wWp9L10FVBF5PT6Sk/fMD2RLg3aBJzUzMIgMLIyoU/jUKei ldQjfOlxFQqrmuBzg4adcEKHnm25330tn5B0JVxMRpaAzi3XGodHJeOxJDygJyZypR cNpDcuyzSywo0dY3lgvHn9PemRtC250gyCal7qWxNMKUhryuh7WHWasezKwsnVCfou p5SDQ9zKGiqb6CZrinOuXCHGne/ROyQEJZcBQzLE6o7pi3yXd6Z9euLIpCh/GsZOqT 6LtIYunH0rrHA== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 30/36] tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS Date: Fri, 12 Jan 2024 19:16:42 +0900 Message-Id: <170505460278.459169.8371239398056629828.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879617145413202 X-GMAIL-MSGID: 1787879617145413202 From: Masami Hiramatsu (Google) Allow fprobe events to be enabled with CONFIG_DYNAMIC_FTRACE_WITH_ARGS. With this change, fprobe events mostly use ftrace_regs instead of pt_regs. 
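As a concrete illustration, an entry callback can now be written purely against the ftrace_regs accessors; a minimal sketch follows (the handler, the probed symbol and the fetched slots are made up, only the accessors and the callback signature come from this series):

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

/* Sketch only: reads an argument and a stack slot via ftrace_regs. */
static notrace int example_entry_handler(struct fprobe *fp, unsigned long ip,
					 unsigned long ret_ip,
					 struct ftrace_regs *fregs, void *data)
{
	/* guarded by CONFIG_HAVE_FUNCTION_ARG_ACCESS_API in process_fetch_insn() */
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);
	/* returns 0 without CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */
	unsigned long st2 = ftrace_regs_get_kernel_stack_nth(fregs, 2);

	pr_debug("%pS: arg0=%lx stack[2]=%lx\n", (void *)ip, arg0, st2);
	return 0;
}

static struct fprobe example_fp = {
	.entry_handler = example_entry_handler,
};
/* attach with e.g. register_fprobe(&example_fp, "vfs_read", NULL);
 * the target symbol here is arbitrary. */
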
Note that if the arch doesn't enable HAVE_PT_REGS_COMPAT_FTRACE_REGS, fprobe events will not be able to be used from perf. Signed-off-by: Masami Hiramatsu (Google) --- Chagnes in v3: - Use ftrace_regs_get_return_value(). Changes in v2: - Define ftrace_regs_get_kernel_stack_nth() for !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API. Changes from previous series: Update against the new series. --- include/linux/ftrace.h | 17 +++++++++ kernel/trace/Kconfig | 1 - kernel/trace/trace_fprobe.c | 74 ++++++++++++++++++++------------------- kernel/trace/trace_probe_tmpl.h | 2 + 4 files changed, 55 insertions(+), 39 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 8150edcf8496..ad28daa507f7 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -250,6 +250,23 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs) regs_query_register_offset(name) #endif +#ifdef CONFIG_HAVE_REGS_AND_STACK_ACCESS_API +static __always_inline unsigned long +ftrace_regs_get_kernel_stack_nth(struct ftrace_regs *fregs, unsigned int nth) +{ + unsigned long *stackp; + + stackp = (unsigned long *)ftrace_regs_get_stack_pointer(fregs); + if (((unsigned long)(stackp + nth) & ~(THREAD_SIZE - 1)) == + ((unsigned long)stackp & ~(THREAD_SIZE - 1))) + return *(stackp + nth); + + return 0; +} +#else /* !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ +#define ftrace_regs_get_kernel_stack_nth(fregs, nth) (0L) +#endif /* CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ + typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 1a2544712690..8b15adde1d8f 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -683,7 +683,6 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS - depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 3982626c82e6..7d2a66135f83 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -132,7 +132,7 @@ static int process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, void *base) { - struct pt_regs *regs = rec; + struct ftrace_regs *fregs = rec; unsigned long val; int ret; @@ -140,17 +140,17 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, /* 1st stage: get value from context */ switch (code->op) { case FETCH_OP_STACK: - val = regs_get_kernel_stack_nth(regs, code->param); + val = ftrace_regs_get_kernel_stack_nth(fregs, code->param); break; case FETCH_OP_STACKP: - val = kernel_stack_pointer(regs); + val = ftrace_regs_get_stack_pointer(fregs); break; case FETCH_OP_RETVAL: - val = regs_return_value(regs); + val = ftrace_regs_get_return_value(fregs); break; #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API case FETCH_OP_ARG: - val = regs_get_kernel_argument(regs, code->param); + val = ftrace_regs_get_argument(fregs, code->param); break; #endif case FETCH_NOP_SYMBOL: /* Ignore a place holder */ @@ -170,7 +170,7 @@ NOKPROBE_SYMBOL(process_fetch_insn) /* function entry handler */ static nokprobe_inline void __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs, + struct ftrace_regs *fregs, struct trace_event_file *trace_file) { struct fentry_trace_entry_head *entry; @@ -184,36 +184,36 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if 
(trace_trigger_soft_disabled(trace_file)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->ip = entry_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fentry_trace_func(tf, entry_ip, regs, link->file); + __fentry_trace_func(tf, entry_ip, fregs, link->file); } NOKPROBE_SYMBOL(fentry_trace_func); /* Kretprobe handler */ static nokprobe_inline void __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, struct trace_event_file *trace_file) { struct fexit_trace_entry_head *entry; @@ -227,60 +227,63 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if (trace_trigger_soft_disabled(trace_file)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs) + unsigned long ret_ip, struct ftrace_regs *fregs) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fexit_trace_func(tf, entry_ip, ret_ip, regs, link->file); + __fexit_trace_func(tf, entry_ip, ret_ip, fregs, link->file); } NOKPROBE_SYMBOL(fexit_trace_func); #ifdef CONFIG_PERF_EVENTS static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fentry_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return 0; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return 0; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->ip = entry_ip; memset(&entry[1], 0, dsize); - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); return 0; @@ -289,30 +292,33 @@ NOKPROBE_SYMBOL(fentry_perf_func); static void fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - 
unsigned long ret_ip, struct pt_regs *regs) + unsigned long ret_ip, struct ftrace_regs *fregs) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fexit_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); } @@ -324,17 +330,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; - if (!regs) - return 0; - if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fentry_trace_func(tf, entry_ip, regs); + fentry_trace_func(tf, entry_ip, fregs); + #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - ret = fentry_perf_func(tf, entry_ip, regs); + ret = fentry_perf_func(tf, entry_ip, fregs); #endif return ret; } @@ -345,16 +348,13 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fexit_trace_func(tf, entry_ip, ret_ip, regs); + fexit_trace_func(tf, entry_ip, ret_ip, fregs); + #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - fexit_perf_func(tf, entry_ip, ret_ip, regs); + fexit_perf_func(tf, entry_ip, ret_ip, fregs); #endif } NOKPROBE_SYMBOL(fexit_dispatcher); diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h index 3935b347f874..05445a745a07 100644 --- a/kernel/trace/trace_probe_tmpl.h +++ b/kernel/trace/trace_probe_tmpl.h @@ -232,7 +232,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val, /* Sum up total data length for dynamic arrays (strings) */ static nokprobe_inline int -__get_data_size(struct trace_probe *tp, struct pt_regs *regs) +__get_data_size(struct trace_probe *tp, void *regs) { struct probe_arg *arg; int i, len, ret = 0; From patchwork Fri Jan 12 10:16:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187689 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp79102dyc; Fri, 12 Jan 2024 02:21:37 -0800 (PST) X-Google-Smtp-Source: AGHT+IGCBNaeYMk4DwGt2v8f+MzdqeVqOJuAutIf0xvanzqDoIb2ITVrYK49qwvVI/GnyIX+mCvd X-Received: by 2002:a0c:e292:0:b0:67f:c133:3922 with SMTP id r18-20020a0ce292000000b0067fc1333922mr652254qvl.129.1705054897092; Fri, 12 Jan 2024 02:21:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054897; cv=none; d=google.com; s=arc-20160816; b=BcSt2qqCoVHXQpMicPp7fqJ5SyvZyc/5XaBMKGfLyckwI7EiIUfIYEwcAVSzEtE9J+ 
FaIR5twdYK3FQiIrsT7KL7ixBebgO6AeoV7ZDR80XP0CjI4MbN1wgMvoc0e5C28/m+IY Pqv0msgFD5SYbehK8rSRg8PQOtshW2Zm8vXAt8U6qXGIZo+zYBTpJQntWnjR/4n5S8n4 jITew5PHz4Iy36+2LiA/U+h4Ls0GghG1vXZAWXwTujIhRns2mG/dBXO8mqcqdWSgVBQD +poc2lyXeQqC/+uljBp9gttOzzqE+wkxC6tP7miFHSA8Q8WlV0yJ/JADi41icl/tg/RB Qd9g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=B+IimFdLV6G8hCJEMsVcQeurP5etXlmAnEIlOhkzF3g=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=eBhps0XuAU9yjPjzRtY/vXcHZC3zBN3ALtsHgXesI7ZfoYDchsEQwAk7fEHvJ2a5Q4 0T+MynehI6wHI+tpBeDFvSp0quYHsinicBaCYJonK3IEfybswaroUSy4gC2i/Q8sZKRH ARDdZzVIl/HMT4j++M9dBXGfl7+G7Mq+dJSX4VAsX/af+epsrr+LCUzShFZuZAdlBkYJ PCo6nfwcMLeac5sdrwY9+OFH8VspLb7r+QVcTyk86G1HS5UQ2VnvePf7Y0hpPjGQ24vm ADeD9Piw6WrqXE84zw4Gt/R0di+Taud7PXJ8P5Va+dBVvtNjzI/lpVB6NOQ2anOyFpc1 SxYg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=pO5KVae6; spf=pass (google.com: domain of linux-kernel+bounces-24580-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24580-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. [147.75.199.223]) by mx.google.com with ESMTPS id e23-20020a0caa57000000b0067f38efba40si2512695qvb.144.2024.01.12.02.21.36 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:21:37 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24580-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=pO5KVae6; spf=pass (google.com: domain of linux-kernel+bounces-24580-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24580-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id DD3A91C252D0 for ; Fri, 12 Jan 2024 10:21:36 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id C25B7651A0; Fri, 12 Jan 2024 10:17:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pO5KVae6" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 25C7565197; Fri, 12 Jan 2024 10:17:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ED3ACC433F1; Fri, 12 Jan 2024 10:16:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054620; bh=uaiQ0NlnGDpVimzHf7hc4bouD6LM9m4Fjq8sHDW6itc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pO5KVae6AmOuqr6kVlJNH4be4B4TxbaMaLYsDVdubgNkwu7Yt5xOUbIg1Ei6Py3/F 
DE7LEQSb+gJYMhB6aLgrhmfCbqhrh2WizYlJEsjbkb4UCw8pLZNIhRWxRQ4vlLndn5 H1PPtiznmiVcRK/ldP8naIsSVMXvNk7FcKIo5CdGx6tnFPb22Xntqri4NCZar7pqEN VIG42/WGYGdZqNBwuDHocupYL5vZzXIkkr+K/sPudQC7jsJYR0ncr2EBdcqZb9y2qw EHf05ahEti5GRx6GZ0snNTq9v7GpVBQ8EA8MVznLLn6Kod2EPfR+LMWC0bcTTIQ6vK 0dP2h5Ucn802A== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 31/36] bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled Date: Fri, 12 Jan 2024 19:16:54 +0900 Message-Id: <170505461435.459169.1646224653234049177.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879643701164475 X-GMAIL-MSGID: 1787879643701164475 From: Masami Hiramatsu (Google) Enable kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is converted from ftrace_regs by ftrace_partial_regs(), thus some registers may always returns 0. But it should be enough for function entry (access arguments) and exit (access return value). Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes from previous series: NOTHING, Update against the new series. --- kernel/trace/bpf_trace.c | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index efb792f8f2ea..24ee4e960f1d 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2503,7 +2503,7 @@ static int __init bpf_event_init(void) fs_initcall(bpf_event_init); #endif /* CONFIG_MODULES */ -#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) +#ifdef CONFIG_FPROBE struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2526,6 +2526,8 @@ struct user_syms { char *buf; }; +static DEFINE_PER_CPU(struct pt_regs, bpf_kprobe_multi_pt_regs); + static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32 cnt) { unsigned long __user usymbol; @@ -2703,13 +2705,14 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx) static int kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, - unsigned long entry_ip, struct pt_regs *regs) + unsigned long entry_ip, struct ftrace_regs *fregs) { struct bpf_kprobe_multi_run_ctx run_ctx = { .link = link, .entry_ip = entry_ip, }; struct bpf_run_ctx *old_run_ctx; + struct pt_regs *regs; int err; if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) { @@ -2720,6 +2723,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, migrate_disable(); rcu_read_lock(); + regs = ftrace_partial_regs(fregs, this_cpu_ptr(&bpf_kprobe_multi_pt_regs)); old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx); err = bpf_prog_run(link->link.prog, regs); bpf_reset_run_ctx(old_run_ctx); @@ -2737,13 +2741,9 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, void *data) { struct bpf_kprobe_multi_link *link; - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return 0; link = container_of(fp, struct bpf_kprobe_multi_link, fp); - 
kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); + kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs); return 0; } @@ -2753,13 +2753,9 @@ kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, void *data) { struct bpf_kprobe_multi_link *link; - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); - kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); + kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs); } static int symbols_cmp_r(const void *a, const void *b, const void *priv) @@ -3016,7 +3012,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +#else /* !CONFIG_FPROBE */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; From patchwork Fri Jan 12 10:17:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187690 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:693c:2614:b0:101:6a76:bbe3 with SMTP id mm20csp79210dyc; Fri, 12 Jan 2024 02:21:55 -0800 (PST) X-Google-Smtp-Source: AGHT+IHoAdVq19oSolYw8SnmxrqcLUzoeUjbdnBHdQQJsxKVfFLwH55v48dXsHVWT2fQe0GJUxHg X-Received: by 2002:a05:6808:3193:b0:3bb:cc24:107d with SMTP id cd19-20020a056808319300b003bbcc24107dmr822278oib.71.1705054914830; Fri, 12 Jan 2024 02:21:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1705054914; cv=none; d=google.com; s=arc-20160816; b=1BhZfwuR7itEav3qCxF434u2inD1NMVqdIKV7VLuOTzeYA1zXc/+XTHH4T/+w9khmL vhq2Iy2GkwdibzUO82Ythq4i09WYTcPPhL4i797uz6Kx1xHHb1rHf9//NpkH+LL/WJbx c+Fn9eC8owyBxqN0TF4fhydCs4ieVfA4OfvsKeiFAshPeJNLOPMSsGSAzrfUCpVhd/hi CxPEyjuX/h/iQGZtt9RofFI58sJ+3Ylef4uoCItpJv1e7tCVRDwkY7Hh5sLDmBzwhz6S 8RW38vRNz6oV+8mMAYz+A277Kb42/iuccAUoaiUzAloZGy+H85237x7GdTQibq9F+nJK US2Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:user-agent:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=B5mAt8W8azn47LK2kphRwjqH60uu+XqgzfbWzrBC480=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=INE6S2HyfcRl7MD40RaDiHouqFghfAIeeBM5tzphuY8i9blTeIUIVzlhPmIomkj8V3 yeFQ4h9wr322B09VJ16F/CqDoMRXj1iPDOMGu6m+ErY9KGn58yorNMpBXqisNCzAbcuh PXCpU4LOqvzcyL0APHbeifk2IBMMYH30J7XGPfp+ZoivEAh8Ilq7EWVqhfK02fBQw5sW qmhl7k7uKgwj4yVYWsLf9cTH1dmIhpG+XXrnIkmMi/YlFStQkbZug43KDp+s2P2QlY9f oLbxZ06OgpDfv9pe1oUNZJ79fDtBdH8vHtqoX5WvIHOb21bnduBDz+1R9VoXgrpz9ZmY WTnQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=VbG1OhU1; spf=pass (google.com: domain of linux-kernel+bounces-24581-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24581-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) by mx.google.com with ESMTPS id r5-20020a0ccc05000000b0067f6e471793si2537717qvk.553.2024.01.12.02.21.54 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Jan 2024 02:21:54 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-24581-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=VbG1OhU1; spf=pass (google.com: domain of linux-kernel+bounces-24581-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-24581-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 8BAC01C24B8F for ; Fri, 12 Jan 2024 10:21:54 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id AFCA6651AC; Fri, 12 Jan 2024 10:17:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="VbG1OhU1" Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5769D65197; Fri, 12 Jan 2024 10:17:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6F0AEC433F1; Fri, 12 Jan 2024 10:17:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1705054632; bh=xiqizROhD4kUvIgeKz/lwPBlsbppoAx8BUL+Q3/3Du4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VbG1OhU13oN2suyJhIcfmLHWK4bNxDSKGr60G60z92CLIqeJ2vhmgfPZ48rtXXe5J V2QdlMMy6Aaf9JE+zhTGASV4pqawt23VT1Ml+tAVEHm981Omw6HPlgXg1dB8jS8gzv MYP7lmdFfJts8vl77Ggb/htULZeC8U65a+h4TAkSl445cFNxBpV9ucSHCz7uOWomFB MVw6LfVe/nbG29HtAV0MiuveSdsUTDzzP+YL0DXH3K49wy6lMNe44CUd02PwTxediR 1RABNxFqO1/vct9B8NY3Nd/tpi4Cvf3ORhq+YoxS3fbnokYeXtH4SBTrdBH8E4YUGh 2Q1HgauiFJtRw== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 32/36] fprobe: Rewrite fprobe on function-graph tracer Date: Fri, 12 Jan 2024 19:17:06 +0900 Message-Id: <170505462606.459169.1375700979988728260.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> User-Agent: StGit/0.19 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787879662106199689 X-GMAIL-MSGID: 1787879662106199689 From: Masami Hiramatsu (Google) Rewrite fprobe implementation on function-graph tracer. Major API changes are: - 'nr_maxactive' field is deprecated. 
- This depends on CONFIG_DYNAMIC_FTRACE_WITH_ARGS or !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and CONFIG_HAVE_FUNCTION_GRAPH_FREGS. So currently works only on x86_64. - Currently the entry size is limited in 15 * sizeof(long). - If there is too many fprobe exit handler set on the same function, it will fail to probe. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Update for new reserve_data/retrieve_data API. - Fix internal push/pop on fgraph data logic so that it can correctly save/restore the returning fprobes. Changes in v2: - Add more lockdep_assert_held(fprobe_mutex) - Use READ_ONCE() and WRITE_ONCE() for fprobe_hlist_node::fp. - Add NOKPROBE_SYMBOL() for the functions which is called from entry/exit callback. --- include/linux/fprobe.h | 54 +++- kernel/trace/Kconfig | 8 - kernel/trace/fprobe.c | 633 ++++++++++++++++++++++++++++++++++-------------- lib/test_fprobe.c | 45 --- 4 files changed, 494 insertions(+), 246 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 879a30956009..08b37b0d1d05 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -5,32 +5,56 @@ #include #include -#include +#include +#include +#include + +struct fprobe; + +/** + * strcut fprobe_hlist_node - address based hash list node for fprobe. + * + * @hlist: The hlist node for address search hash table. + * @addr: The address represented by this. + * @fp: The fprobe which owns this. + */ +struct fprobe_hlist_node { + struct hlist_node hlist; + unsigned long addr; + struct fprobe *fp; +}; + +/** + * struct fprobe_hlist - hash list nodes for fprobe. + * + * @hlist: The hlist node for existence checking hash table. + * @rcu: rcu_head for RCU deferred release. + * @fp: The fprobe which owns this fprobe_hlist. + * @size: The size of @array. + * @array: The fprobe_hlist_node for each address to probe. + */ +struct fprobe_hlist { + struct hlist_node hlist; + struct rcu_head rcu; + struct fprobe *fp; + int size; + struct fprobe_hlist_node array[]; +}; /** * struct fprobe - ftrace based probe. - * @ops: The ftrace_ops. + * * @nmissed: The counter for missing events. * @flags: The status flag. - * @rethook: The rethook data structure. (internal data) * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. + * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. + * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. */ struct fprobe { -#ifdef CONFIG_FUNCTION_TRACER - /* - * If CONFIG_FUNCTION_TRACER is not set, CONFIG_FPROBE is disabled too. - * But user of fprobe may keep embedding the struct fprobe on their own - * code. To avoid build error, this will keep the fprobe data structure - * defined here, but remove ftrace_ops data structure. - */ - struct ftrace_ops ops; -#endif unsigned long nmissed; unsigned int flags; - struct rethook *rethook; size_t entry_data_size; int nr_maxactive; @@ -40,6 +64,8 @@ struct fprobe { void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); + + struct fprobe_hlist *hlist_array; }; /* This fprobe is soft-disabled. 
*/ diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 8b15adde1d8f..169588021d90 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -296,11 +296,9 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" - depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_PT_REGS_TO_FTRACE_REGS_CAST || !HAVE_DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_RETHOOK - select RETHOOK + depends on FUNCTION_GRAPH_TRACER + depends on HAVE_FUNCTION_GRAPH_FREGS + depends on DYNAMIC_FTRACE_WITH_ARGS || !HAVE_DYNAMIC_FTRACE_WITH_ARGS default n help This option enables kernel function probe (fprobe) based on ftrace. diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 31210423efc3..53e681c2458b 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -8,98 +8,193 @@ #include #include #include -#include +#include +#include #include #include #include "trace.h" -struct fprobe_rethook_node { - struct rethook_node node; - unsigned long entry_ip; - unsigned long entry_parent_ip; - char data[]; -}; +#define FPROBE_IP_HASH_BITS 8 +#define FPROBE_IP_TABLE_SIZE (1 << FPROBE_IP_HASH_BITS) -static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) -{ - struct fprobe_rethook_node *fpr; - struct rethook_node *rh = NULL; - struct fprobe *fp; - void *entry_data = NULL; - int ret = 0; +#define FPROBE_HASH_BITS 6 +#define FPROBE_TABLE_SIZE (1 << FPROBE_HASH_BITS) - fp = container_of(ops, struct fprobe, ops); +/* + * fprobe_table: hold 'fprobe_hlist::hlist' for checking the fprobe still + * exists. The key is the address of fprobe instance. + * fprobe_ip_table: hold 'fprobe_hlist::array[*]' for searching the fprobe + * instance related to the funciton address. The key is the ftrace IP + * address. + * + * When unregistering the fprobe, fprobe_hlist::fp and fprobe_hlist::array[*].fp + * are set NULL and delete those from both hash tables (by hlist_del_rcu). + * After an RCU grace period, the fprobe_hlist itself will be released. + * + * fprobe_table and fprobe_ip_table can be accessed from either + * - Normal hlist traversal and RCU add/del under 'fprobe_mutex' is held. + * - RCU hlist traversal under disabling preempt + */ +static struct hlist_head fprobe_table[FPROBE_TABLE_SIZE]; +static struct hlist_head fprobe_ip_table[FPROBE_IP_TABLE_SIZE]; +static DEFINE_MUTEX(fprobe_mutex); - if (fp->exit_handler) { - rh = rethook_try_get(fp->rethook); - if (!rh) { - fp->nmissed++; - return; - } - fpr = container_of(rh, struct fprobe_rethook_node, node); - fpr->entry_ip = ip; - fpr->entry_parent_ip = parent_ip; - if (fp->entry_data_size) - entry_data = fpr->data; +/* + * Find first fprobe in the hlist. It will be iterated twice in the entry + * probe, once for correcting the total required size, the second time is + * calling back the user handlers. + * Thus the hlist in the fprobe_table must be sorted and new probe needs to + * be added *before* the first fprobe. 
+ */ +static struct fprobe_hlist_node *find_first_fprobe_node(unsigned long ip) +{ + struct fprobe_hlist_node *node; + struct hlist_head *head; + + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_for_each_entry_rcu(node, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (node->addr == ip) + return node; } + return NULL; +} +NOKPROBE_SYMBOL(find_first_fprobe_node); - if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); +/* Node insertion and deletion requires the fprobe_mutex */ +static void insert_fprobe_node(struct fprobe_hlist_node *node) +{ + unsigned long ip = node->addr; + struct fprobe_hlist_node *next; + struct hlist_head *head; - /* If entry_handler returns !0, nmissed is not counted. */ - if (rh) { - if (ret) - rethook_recycle(rh); - else - rethook_hook(rh, ftrace_get_regs(fregs), true); + lockdep_assert_held(&fprobe_mutex); + + next = find_first_fprobe_node(ip); + if (next) { + hlist_add_before_rcu(&node->hlist, &next->hlist); + return; } + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_add_head_rcu(&node->hlist, head); } -static void fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +/* Return true if there are synonims */ +static bool delete_fprobe_node(struct fprobe_hlist_node *node) { - struct fprobe *fp; - int bit; + lockdep_assert_held(&fprobe_mutex); - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; + WRITE_ONCE(node->fp, NULL); + hlist_del_rcu(&node->hlist); + return !!find_first_fprobe_node(node->addr); +} - /* recursion detection has to go before any traceable function and - * all functions before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; +/* Check existence of the fprobe */ +static bool is_fprobe_still_exist(struct fprobe *fp) +{ + struct hlist_head *head; + struct fprobe_hlist *fph; + + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_for_each_entry_rcu(fph, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (fph->fp == fp) + return true; } - __fprobe_handler(ip, parent_ip, ops, fregs); - ftrace_test_recursion_unlock(bit); + return false; +} +NOKPROBE_SYMBOL(is_fprobe_still_exist); + +static int add_fprobe_hash(struct fprobe *fp) +{ + struct fprobe_hlist *fph = fp->hlist_array; + struct hlist_head *head; + + lockdep_assert_held(&fprobe_mutex); + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (is_fprobe_still_exist(fp)) + return -EEXIST; + + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_add_head_rcu(&fp->hlist_array->hlist, head); + return 0; } -NOKPROBE_SYMBOL(fprobe_handler); -static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +static int del_fprobe_hash(struct fprobe *fp) { - struct fprobe *fp; - int bit; + struct fprobe_hlist *fph = fp->hlist_array; - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; + lockdep_assert_held(&fprobe_mutex); - /* recursion detection has to go before any traceable function and - * all functions called before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (!is_fprobe_still_exist(fp)) + return -ENOENT; + + fph->fp = NULL; + hlist_del_rcu(&fph->hlist); + return 0; +} + 
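/*
 * Illustration only (not in the patch): a worked example of the header
 * encoding implemented by the macros below, assuming a 64-bit kernel
 * (BITS_PER_LONG == 64) and an arbitrary kernel pointer value:
 *
 *   FPROBE_HEADER_PTR_BITS = 64 - 4 = 60, mask = GENMASK(59, 0)
 *   fp         = 0xffff888012345678  (top 4 bits all set, as required)
 *   size_words = 3                   (entry_data_size rounded up to 3 longs)
 *   header     = (3UL << 60) | (fp & GENMASK(59, 0))
 *              = 0x3fff888012345678
 *
 *   decode:  size_words = header >> 60 = 3
 *            fp         = (header & GENMASK(59, 0)) | ~GENMASK(59, 0)
 *                       = 0xffff888012345678
 *
 * The 4 size bits are what cap the per-probe entry data at 15 * sizeof(long).
 */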
+/* The entry data size is 4 bits (=16) * sizeof(long) in maximum */ +#define FPROBE_HEADER_SIZE_BITS 4 +#define MAX_FPROBE_DATA_SIZE_WORD ((1L << FPROBE_HEADER_SIZE_BITS) - 1) +#define MAX_FPROBE_DATA_SIZE (MAX_FPROBE_DATA_SIZE_WORD * sizeof(long)) +#define FPROBE_HEADER_PTR_BITS (BITS_PER_LONG - FPROBE_HEADER_SIZE_BITS) +#define FPROBE_HEADER_PTR_MASK GENMASK(FPROBE_HEADER_PTR_BITS - 1, 0) +#define FPROBE_HEADER_SIZE sizeof(unsigned long) + +static inline unsigned long encode_fprobe_header(struct fprobe *fp, int size_words) +{ + if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD || + ((unsigned long)fp & ~FPROBE_HEADER_PTR_MASK) != + ~FPROBE_HEADER_PTR_MASK)) { + return 0; } + return ((unsigned long)size_words << FPROBE_HEADER_PTR_BITS) | + ((unsigned long)fp & FPROBE_HEADER_PTR_MASK); +} + +/* Return reserved data size in words */ +static inline int decode_fprobe_header(unsigned long val, struct fprobe **fp) +{ + unsigned long ptr; + + ptr = (val & FPROBE_HEADER_PTR_MASK) | ~FPROBE_HEADER_PTR_MASK; + if (fp) + *fp = (struct fprobe *)ptr; + return val >> FPROBE_HEADER_PTR_BITS; +} + +/* + * fprobe shadow stack management: + * Since fprobe shares a single fgraph_ops, it needs to share the stack entry + * among the probes on the same function exit. Note that a new probe can be + * registered before a target function is returning, we can not use the hash + * table to find the corresponding probes. Thus the probe address is stored on + * the shadow stack with its entry data size. + * + */ +static inline int __fprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + if (!fp->entry_handler) + return 0; + + return fp->entry_handler(fp, ip, parent_ip, fregs, data); +} +static inline int __fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + int ret; /* * This user handler is shared with other kprobes and is not expected to be * called recursively. 
So if any other kprobe handler is running, this will @@ -108,45 +203,180 @@ static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, */ if (unlikely(kprobe_running())) { fp->nmissed++; - goto recursion_unlock; + return 0; } kprobe_busy_begin(); - __fprobe_handler(ip, parent_ip, ops, fregs); + ret = __fprobe_handler(ip, parent_ip, fp, fregs, data); kprobe_busy_end(); - -recursion_unlock: - ftrace_test_recursion_unlock(bit); + return ret; } -static void fprobe_exit_handler(struct rethook_node *rh, void *data, - unsigned long ret_ip, struct pt_regs *regs) +static int fprobe_entry(unsigned long func, unsigned long ret_ip, + struct ftrace_regs *fregs, struct fgraph_ops *gops) { - struct fprobe *fp = (struct fprobe *)data; - struct fprobe_rethook_node *fpr; - struct ftrace_regs *fregs = (struct ftrace_regs *)regs; - int bit; + struct fprobe_hlist_node *node, *first; + unsigned long *fgraph_data = NULL; + unsigned long header; + int reserved_words; + struct fprobe *fp; + int used, ret; - if (!fp || fprobe_disabled(fp)) - return; + if (WARN_ON_ONCE(!fregs)) + return 0; - fpr = container_of(rh, struct fprobe_rethook_node, node); + first = node = find_first_fprobe_node(func); + if (unlikely(!first)) + return 0; + + reserved_words = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || !fp->exit_handler) + continue; + /* + * Since fprobe can be enabled until the next loop, we ignore the + * fprobe's disabled flag in this loop. + */ + reserved_words += + DIV_ROUND_UP(fp->entry_data_size, sizeof(long)) + 1; + } + node = first; + if (reserved_words) { + fgraph_data = fgraph_reserve_data(gops->idx, reserved_words * sizeof(long)); + if (unlikely(!fgraph_data)) { + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (fp && !fprobe_disabled(fp)) + fp->nmissed++; + } + return 0; + } + } /* - * we need to assure no calls to traceable functions in-between the - * end of fprobe_handler and the beginning of fprobe_exit_handler. + * TODO: recursion detection has been done in the fgraph. Thus we need + * to add a callback to increment missed counter. */ - bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip); - if (bit < 0) { - fp->nmissed++; + used = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + void *data; + + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || fprobe_disabled(fp)) + continue; + + if (fp->entry_data_size && fp->exit_handler) + data = fgraph_data + used + 1; + else + data = NULL; + + if (fprobe_shared_with_kprobes(fp)) + ret = __fprobe_kprobe_handler(func, ret_ip, fp, fregs, data); + else + ret = __fprobe_handler(func, ret_ip, fp, fregs, data); + /* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */ + if (!ret && fp->exit_handler) { + int size_words = DIV_ROUND_UP(fp->entry_data_size, sizeof(long)); + + header = encode_fprobe_header(fp, size_words); + if (likely(header)) { + fgraph_data[used] = header; + used += size_words + 1; + } + } + } + if (used < reserved_words) + memset(fgraph_data + used, 0, reserved_words - used); + + /* If any exit_handler is set, data must be used. 
*/ + return used != 0; +} +NOKPROBE_SYMBOL(fprobe_entry); + +static void fprobe_return(unsigned long func, unsigned long ret_ip, + struct ftrace_regs *fregs, struct fgraph_ops *gops) +{ + unsigned long *fgraph_data = NULL; + unsigned long val; + struct fprobe *fp; + int size, curr; + int size_words; + + fgraph_data = (unsigned long *)fgraph_retrieve_data(gops->idx, &size); + if (!fgraph_data) return; + size_words = DIV_ROUND_UP(size, sizeof(long)); + + preempt_disable(); + + curr = 0; + while (size_words > curr) { + val = fgraph_data[curr++]; + if (!val) + break; + + size = decode_fprobe_header(val, &fp); + if (fp && is_fprobe_still_exist(fp) && !fprobe_disabled(fp)) { + if (WARN_ON_ONCE(curr + size > size_words)) + break; + fp->exit_handler(fp, func, ret_ip, fregs, + size ? fgraph_data + curr : NULL); + } + curr += size + 1; } + preempt_enable(); +} +NOKPROBE_SYMBOL(fprobe_return); - fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, - fp->entry_data_size ? (void *)fpr->data : NULL); - ftrace_test_recursion_unlock(bit); +static struct fgraph_ops fprobe_graph_ops = { + .entryregfunc = fprobe_entry, + .retregfunc = fprobe_return, +}; +static int fprobe_graph_active; + +/* Add @addrs to the ftrace filter and register fgraph if needed. */ +static int fprobe_graph_add_ips(unsigned long *addrs, int num) +{ + int ret; + + lockdep_assert_held(&fprobe_mutex); + + ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0); + if (ret) + return ret; + + if (!fprobe_graph_active) { + ret = register_ftrace_graph(&fprobe_graph_ops); + if (WARN_ON_ONCE(ret)) { + ftrace_free_filter(&fprobe_graph_ops.ops); + return ret; + } + } + fprobe_graph_active++; + return 0; +} + +/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */ +static void fprobe_graph_remove_ips(unsigned long *addrs, int num) +{ + lockdep_assert_held(&fprobe_mutex); + + fprobe_graph_active--; + if (!fprobe_graph_active) { + /* Q: should we unregister it ? 
*/ + unregister_ftrace_graph(&fprobe_graph_ops); + return; + } + + ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0); } -NOKPROBE_SYMBOL(fprobe_exit_handler); static int symbols_cmp(const void *a, const void *b) { @@ -176,56 +406,97 @@ static unsigned long *get_ftrace_locations(const char **syms, int num) return ERR_PTR(-ENOENT); } -static void fprobe_init(struct fprobe *fp) -{ - fp->nmissed = 0; - if (fprobe_shared_with_kprobes(fp)) - fp->ops.func = fprobe_kprobe_handler; - else - fp->ops.func = fprobe_handler; - - fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS; -} +struct filter_match_data { + const char *filter; + const char *notfilter; + size_t index; + size_t size; + unsigned long *addrs; +}; -static int fprobe_init_rethook(struct fprobe *fp, int num) +static int filter_match_callback(void *data, const char *name, unsigned long addr) { - int size; + struct filter_match_data *match = data; - if (num <= 0) - return -EINVAL; + if (!glob_match(match->filter, name) || + (match->notfilter && glob_match(match->notfilter, name))) + return 0; - if (!fp->exit_handler) { - fp->rethook = NULL; + if (!ftrace_location(addr)) return 0; - } - /* Initialize rethook if needed */ - if (fp->nr_maxactive) - size = fp->nr_maxactive; - else - size = num * num_possible_cpus() * 2; - if (size <= 0) - return -EINVAL; + if (match->addrs) + match->addrs[match->index] = addr; - /* Initialize rethook */ - fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, - sizeof(struct fprobe_rethook_node), size); - if (IS_ERR(fp->rethook)) - return PTR_ERR(fp->rethook); + match->index++; + return match->index == match->size; +} - return 0; +/* + * Make IP list from the filter/no-filter glob patterns. + * Return the number of matched symbols, or -ENOENT. + */ +static int ip_list_from_filter(const char *filter, const char *notfilter, + unsigned long *addrs, size_t size) +{ + struct filter_match_data match = { .filter = filter, .notfilter = notfilter, + .index = 0, .size = size, .addrs = addrs}; + int ret; + + ret = kallsyms_on_each_symbol(filter_match_callback, &match); + if (ret < 0) + return ret; + ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match); + if (ret < 0) + return ret; + + return match.index ?: -ENOENT; } static void fprobe_fail_cleanup(struct fprobe *fp) { - if (!IS_ERR_OR_NULL(fp->rethook)) { - /* Don't need to cleanup rethook->handler because this is not used. */ - rethook_free(fp->rethook); - fp->rethook = NULL; + kfree(fp->hlist_array); + fp->hlist_array = NULL; +} + +/* Initialize the fprobe data structure. */ +static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num) +{ + struct fprobe_hlist *hlist_array; + unsigned long addr; + int size, i; + + if (!fp || !addrs || num <= 0) + return -EINVAL; + + size = ALIGN(fp->entry_data_size, sizeof(long)); + if (size > MAX_FPROBE_DATA_SIZE) + return -E2BIG; + fp->entry_data_size = size; + + hlist_array = kzalloc(struct_size(hlist_array, array, num), GFP_KERNEL); + if (!hlist_array) + return -ENOMEM; + + fp->nmissed = 0; + + hlist_array->size = num; + fp->hlist_array = hlist_array; + hlist_array->fp = fp; + for (i = 0; i < num; i++) { + hlist_array->array[i].fp = fp; + addr = ftrace_location(addrs[i]); + if (!addr) { + fprobe_fail_cleanup(fp); + return -ENOENT; + } + hlist_array->array[i].addr = addr; } - ftrace_free_filter(&fp->ops); + return 0; } +#define FPROBE_IPS_MAX INT_MAX + /** * register_fprobe() - Register fprobe to ftrace by pattern. * @fp: A fprobe data structure to be registered. 
@@ -239,46 +510,24 @@ static void fprobe_fail_cleanup(struct fprobe *fp) */ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter) { - struct ftrace_hash *hash; - unsigned char *str; - int ret, len; + unsigned long *addrs; + int ret; if (!fp || !filter) return -EINVAL; - fprobe_init(fp); - - len = strlen(filter); - str = kstrdup(filter, GFP_KERNEL); - ret = ftrace_set_filter(&fp->ops, str, len, 0); - kfree(str); - if (ret) + ret = ip_list_from_filter(filter, notfilter, NULL, FPROBE_IPS_MAX); + if (ret < 0) return ret; - if (notfilter) { - len = strlen(notfilter); - str = kstrdup(notfilter, GFP_KERNEL); - ret = ftrace_set_notrace(&fp->ops, str, len, 0); - kfree(str); - if (ret) - goto out; - } - - /* TODO: - * correctly calculate the total number of filtered symbols - * from both filter and notfilter. - */ - hash = rcu_access_pointer(fp->ops.local_hash.filter_hash); - if (WARN_ON_ONCE(!hash)) - goto out; - - ret = fprobe_init_rethook(fp, (int)hash->count); - if (!ret) - ret = register_ftrace_function(&fp->ops); + addrs = kcalloc(ret, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) + return -ENOMEM; + ret = ip_list_from_filter(filter, notfilter, addrs, ret); + if (ret > 0) + ret = register_fprobe_ips(fp, addrs, ret); -out: - if (ret) - fprobe_fail_cleanup(fp); + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(register_fprobe); @@ -286,7 +535,7 @@ EXPORT_SYMBOL_GPL(register_fprobe); /** * register_fprobe_ips() - Register fprobe to ftrace by address. * @fp: A fprobe data structure to be registered. - * @addrs: An array of target ftrace location addresses. + * @addrs: An array of target function address. * @num: The number of entries of @addrs. * * Register @fp to ftrace for enabling the probe on the address given by @addrs. @@ -298,23 +547,27 @@ EXPORT_SYMBOL_GPL(register_fprobe); */ int register_fprobe_ips(struct fprobe *fp, unsigned long *addrs, int num) { - int ret; - - if (!fp || !addrs || num <= 0) - return -EINVAL; - - fprobe_init(fp); + struct fprobe_hlist *hlist_array; + int ret, i; - ret = ftrace_set_filter_ips(&fp->ops, addrs, num, 0, 0); + ret = fprobe_init(fp, addrs, num); if (ret) return ret; - ret = fprobe_init_rethook(fp, num); - if (!ret) - ret = register_ftrace_function(&fp->ops); + mutex_lock(&fprobe_mutex); + + hlist_array = fp->hlist_array; + ret = fprobe_graph_add_ips(addrs, num); + if (!ret) { + add_fprobe_hash(fp); + for (i = 0; i < hlist_array->size; i++) + insert_fprobe_node(&hlist_array->array[i]); + } + mutex_unlock(&fprobe_mutex); if (ret) fprobe_fail_cleanup(fp); + return ret; } EXPORT_SYMBOL_GPL(register_fprobe_ips); @@ -352,14 +605,13 @@ EXPORT_SYMBOL_GPL(register_fprobe_syms); bool fprobe_is_registered(struct fprobe *fp) { - if (!fp || (fp->ops.saved_func != fprobe_handler && - fp->ops.saved_func != fprobe_kprobe_handler)) + if (!fp || !fp->hlist_array) return false; return true; } /** - * unregister_fprobe() - Unregister fprobe from ftrace + * unregister_fprobe() - Unregister fprobe. * @fp: A fprobe data structure to be unregistered. * * Unregister fprobe (and remove ftrace hooks from the function entries). 
@@ -368,23 +620,40 @@ bool fprobe_is_registered(struct fprobe *fp) */ int unregister_fprobe(struct fprobe *fp) { - int ret; + struct fprobe_hlist *hlist_array; + unsigned long *addrs = NULL; + int ret = 0, i, count; - if (!fprobe_is_registered(fp)) - return -EINVAL; + mutex_lock(&fprobe_mutex); + if (!fp || !is_fprobe_still_exist(fp)) { + ret = -EINVAL; + goto out; + } - if (!IS_ERR_OR_NULL(fp->rethook)) - rethook_stop(fp->rethook); + hlist_array = fp->hlist_array; + addrs = kcalloc(hlist_array->size, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) { + ret = -ENOMEM; /* TODO: Fallback to one-by-one loop */ + goto out; + } - ret = unregister_ftrace_function(&fp->ops); - if (ret < 0) - return ret; + /* Remove non-synonim ips from table and hash */ + count = 0; + for (i = 0; i < hlist_array->size; i++) { + if (!delete_fprobe_node(&hlist_array->array[i])) + addrs[count++] = hlist_array->array[i].addr; + } + del_fprobe_hash(fp); - if (!IS_ERR_OR_NULL(fp->rethook)) - rethook_free(fp->rethook); + fprobe_graph_remove_ips(addrs, count); - ftrace_free_filter(&fp->ops); + kfree_rcu(hlist_array, rcu); + fp->hlist_array = NULL; +out: + mutex_unlock(&fprobe_mutex); + + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(unregister_fprobe); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 271ce0caeec0..cf92111b5c79 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -17,10 +17,8 @@ static u32 rand1, entry_val, exit_val; /* Use indirect calls to avoid inlining the target functions */ static u32 (*target)(u32 value); static u32 (*target2)(u32 value); -static u32 (*target_nest)(u32 value, u32 (*nest)(u32)); static unsigned long target_ip; static unsigned long target2_ip; -static unsigned long target_nest_ip; static int entry_return_value; static noinline u32 fprobe_selftest_target(u32 value) @@ -33,11 +31,6 @@ static noinline u32 fprobe_selftest_target2(u32 value) return (value / div_factor) + 1; } -static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) -{ - return nest(value + 2); -} - static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *data) @@ -79,22 +72,6 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, KUNIT_EXPECT_NULL(current_test, data); } -static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - return 0; -} - -static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); -} - /* Test entry only (no rethook) */ static void test_fprobe_entry(struct kunit *test) { @@ -191,25 +168,6 @@ static void test_fprobe_data(struct kunit *test) KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); } -/* Test nr_maxactive */ -static void test_fprobe_nest(struct kunit *test) -{ - static const char *syms[] = {"fprobe_selftest_target", "fprobe_selftest_nest_target"}; - struct fprobe fp = { - .entry_handler = nest_entry_handler, - .exit_handler = nest_exit_handler, - .nr_maxactive = 1, - }; - - current_test = test; - KUNIT_EXPECT_EQ(test, 0, register_fprobe_syms(&fp, syms, 2)); - - target_nest(rand1, target); - KUNIT_EXPECT_EQ(test, 1, fp.nmissed); - - KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); -} - static void test_fprobe_skip(struct kunit *test) { struct fprobe 
fp = { @@ -247,10 +205,8 @@ static int fprobe_test_init(struct kunit *test) rand1 = get_random_u32_above(div_factor); target = fprobe_selftest_target; target2 = fprobe_selftest_target2; - target_nest = fprobe_selftest_nest_target; target_ip = get_ftrace_location(target); target2_ip = get_ftrace_location(target2); - target_nest_ip = get_ftrace_location(target_nest); return 0; } @@ -260,7 +216,6 @@ static struct kunit_case fprobe_testcases[] = { KUNIT_CASE(test_fprobe), KUNIT_CASE(test_fprobe_syms), KUNIT_CASE(test_fprobe_data), - KUNIT_CASE(test_fprobe_nest), KUNIT_CASE(test_fprobe_skip), {} }; From patchwork Fri Jan 12 10:17:17 2024 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187691
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 33/36] tracing/fprobe: Remove nr_maxactive from fprobe Date: Fri, 12 Jan 2024 19:17:17 +0900 Message-Id: <170505463759.459169.11138167965973829550.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Remove the deprecated fprobe::nr_maxactive field. This makes fprobe events reject the maxactive number. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Newly added.
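For illustration, a minimal sketch of a kernel-side fprobe user after this change, using only the struct fprobe fields this series keeps (entry_handler, exit_handler, entry_data_size) and the ftrace_regs callback prototypes it documents; the handler names and the data size are hypothetical, not taken from the patch:

#include <linux/fprobe.h>

/* Hypothetical handlers following the callback prototypes used by this series. */
static int my_entry(struct fprobe *fp, unsigned long entry_ip,
                    unsigned long ret_ip, struct ftrace_regs *fregs,
                    void *entry_data)
{
        return 0;       /* a non-zero return cancels the exit callback */
}

static void my_exit(struct fprobe *fp, unsigned long entry_ip,
                    unsigned long ret_ip, struct ftrace_regs *fregs,
                    void *entry_data)
{
}

static struct fprobe my_fprobe = {
        .entry_handler   = my_entry,
        .exit_handler    = my_exit,
        .entry_data_size = sizeof(unsigned long),
        /* .nr_maxactive = 128,  <- no longer exists after this patch */
};

Since exit hooking is now backed by the per-task shadow stack of the function-graph tracer, there is no per-probe rethook pool left to size, which is why the field can simply go away.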
--- include/linux/fprobe.h | 2 -- kernel/trace/trace_fprobe.c | 44 ++++++------------------------------------- 2 files changed, 6 insertions(+), 40 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 08b37b0d1d05..c28d06ddfb8e 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -47,7 +47,6 @@ struct fprobe_hlist { * @nmissed: The counter for missing events. * @flags: The status flag. * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. @@ -56,7 +55,6 @@ struct fprobe { unsigned long nmissed; unsigned int flags; size_t entry_data_size; - int nr_maxactive; int (*entry_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *regs, diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 7d2a66135f83..d96de0dbc0cb 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -375,7 +375,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, const char *event, const char *symbol, struct tracepoint *tpoint, - int maxactive, int nargs, bool is_return) { struct trace_fprobe *tf; @@ -395,7 +394,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, tf->fp.entry_handler = fentry_dispatcher; tf->tpoint = tpoint; - tf->fp.nr_maxactive = maxactive; ret = trace_probe_init(&tf->tp, event, group, false); if (ret < 0) @@ -974,12 +972,11 @@ static int __trace_fprobe_create(int argc, const char *argv[]) * FETCHARG:TYPE : use TYPE instead of unsigned long. */ struct trace_fprobe *tf = NULL; - int i, len, new_argc = 0, ret = 0; + int i, new_argc = 0, ret = 0; bool is_return = false; char *symbol = NULL; const char *event = NULL, *group = FPROBE_EVENT_SYSTEM; const char **new_argv = NULL; - int maxactive = 0; char buf[MAX_EVENT_NAME_LEN]; char gbuf[MAX_EVENT_NAME_LEN]; char sbuf[KSYM_NAME_LEN]; @@ -1000,33 +997,13 @@ static int __trace_fprobe_create(int argc, const char *argv[]) trace_probe_log_init("trace_fprobe", argc, argv); - event = strchr(&argv[0][1], ':'); - if (event) - event++; - - if (isdigit(argv[0][1])) { - if (event) - len = event - &argv[0][1] - 1; - else - len = strlen(&argv[0][1]); - if (len > MAX_EVENT_NAME_LEN - 1) { - trace_probe_log_err(1, BAD_MAXACT); - goto parse_error; - } - memcpy(buf, &argv[0][1], len); - buf[len] = '\0'; - ret = kstrtouint(buf, 0, &maxactive); - if (ret || !maxactive) { + if (argv[0][1] != '\0') { + if (argv[0][1] != ':') { + trace_probe_log_set_index(0); trace_probe_log_err(1, BAD_MAXACT); goto parse_error; } - /* fprobe rethook instances are iterated over via a list. The - * maximum should stay reasonable. 
- */ - if (maxactive > RETHOOK_MAXACTIVE_MAX) { - trace_probe_log_err(1, MAXACT_TOO_BIG); - goto parse_error; - } + event = &argv[0][2]; } trace_probe_log_set_index(1); @@ -1036,12 +1013,6 @@ static int __trace_fprobe_create(int argc, const char *argv[]) if (ret < 0) goto parse_error; - if (!is_return && maxactive) { - trace_probe_log_set_index(0); - trace_probe_log_err(1, BAD_MAXACT_TYPE); - goto parse_error; - } - trace_probe_log_set_index(0); if (event) { ret = traceprobe_parse_event_name(&event, &group, gbuf, @@ -1095,8 +1066,7 @@ static int __trace_fprobe_create(int argc, const char *argv[]) } /* setup a probe */ - tf = alloc_trace_fprobe(group, event, symbol, tpoint, maxactive, - argc, is_return); + tf = alloc_trace_fprobe(group, event, symbol, tpoint, argc, is_return); if (IS_ERR(tf)) { ret = PTR_ERR(tf); /* This must return -ENOMEM, else there is a bug */ @@ -1172,8 +1142,6 @@ static int trace_fprobe_show(struct seq_file *m, struct dyn_event *ev) seq_putc(m, 't'); else seq_putc(m, 'f'); - if (trace_fprobe_is_return(tf) && tf->fp.nr_maxactive) - seq_printf(m, "%d", tf->fp.nr_maxactive); seq_printf(m, ":%s/%s", trace_probe_group_name(&tf->tp), trace_probe_name(&tf->tp)); From patchwork Fri Jan 12 10:17:29 2024 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187692
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 34/36] selftests: ftrace: Remove obsolete maxactive syntax check Date: Fri, 12 Jan 2024 19:17:29 +0900 Message-Id: <170505464937.459169.10120853928494235861.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Since the fprobe event no longer supports maxactive, stop testing the maxactive syntax error checking.
Signed-off-by: Masami Hiramatsu (Google) --- .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc index 20e42c030095..66516073ff27 100644 --- a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc +++ b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc @@ -16,9 +16,7 @@ aarch64) REG=%r0 ;; esac -check_error 'f^100 vfs_read' # MAXACT_NO_KPROBE -check_error 'f^1a111 vfs_read' # BAD_MAXACT -check_error 'f^100000 vfs_read' # MAXACT_TOO_BIG +check_error 'f^100 vfs_read' # BAD_MAXACT check_error 'f ^non_exist_func' # BAD_PROBE_ADDR (enoent) check_error 'f ^vfs_read+10' # BAD_PROBE_ADDR From patchwork Fri Jan 12 10:17:41 2024 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187693
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 35/36] selftests/ftrace: Add a test case for repeating register/unregister fprobe Date: Fri, 12 Jan 2024 19:17:41 +0900 Message-Id: <170505466094.459169.2134061568012803420.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) This test case repeatedly defines and undefines an fprobe dynamic event to ensure that fprobe does not cause any issue with such operations.
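A rough kernel-API analogue of the tracefs loop this test adds might look like the following sketch; it assumes the register_fprobe()/unregister_fprobe() interfaces reworked earlier in this series, and the probed symbol (kernel_clone) and handler name are placeholders rather than anything the patch prescribes:

#include <linux/fprobe.h>

static int stress_entry(struct fprobe *fp, unsigned long entry_ip,
                        unsigned long ret_ip, struct ftrace_regs *fregs,
                        void *entry_data)
{
        return 0;
}

/* Register and unregister the same probe repeatedly, mirroring the
 * 64-iteration loop of the .tc script, to shake out leaks or stale state. */
static int fprobe_register_stress(void)
{
        static struct fprobe fp = { .entry_handler = stress_entry };
        int i, ret;

        for (i = 0; i < 64; i++) {
                ret = register_fprobe(&fp, "kernel_clone", NULL);
                if (ret < 0)
                        return ret;
                ret = unregister_fprobe(&fp);
                if (ret < 0)
                        return ret;
        }
        return 0;
}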
Signed-off-by: Masami Hiramatsu (Google) --- .../test.d/dynevent/add_remove_fprobe_repeat.tc | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc b/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc new file mode 100644 index 000000000000..b4ad09237e2a --- /dev/null +++ b/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc @@ -0,0 +1,19 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 +# description: Generic dynamic event - Repeating add/remove fprobe events +# requires: dynamic_events "f[:[/][]] [%return] []":README + +echo 0 > events/enable +echo > dynamic_events + +PLACE=$FUNCTION_FORK +REPEAT_TIMES=64 + +for i in `seq 1 $REPEAT_TIMES`; do + echo "f:myevent $PLACE" >> dynamic_events + grep -q myevent dynamic_events + test -d events/fprobes/myevent + echo > dynamic_events +done + +clear_trace From patchwork Fri Jan 12 10:17:52 2024 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 187694
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v6 36/36] Documentation: probes: Update fprobe on function-graph tracer Date: Fri, 12 Jan 2024 19:17:52 +0900 Message-Id: <170505467231.459169.5422331995958897132.stgit@devnote2> In-Reply-To: <170505424954.459169.10630626365737237288.stgit@devnote2> References: <170505424954.459169.10630626365737237288.stgit@devnote2> From: Masami Hiramatsu (Google) Update the fprobe documentation for the new fprobe on function-graph tracer. This includes some behavior changes and the pt_regs to ftrace_regs interface change.
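To make the interface change concrete, here is a hedged sketch of entry/exit callbacks written against struct ftrace_regs; the ftrace_regs_get_argument() and ftrace_regs_get_return_value() accessors are assumptions based on the `ftrace_regs_*` APIs the updated document refers to and may differ by kernel version:

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

/* Entry handler: read the first function argument through ftrace_regs. */
static int sample_entry(struct fprobe *fp, unsigned long entry_ip,
                        unsigned long ret_ip, struct ftrace_regs *fregs,
                        void *entry_data)
{
        unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

        pr_debug("entry ip=0x%lx arg0=0x%lx\n", entry_ip, arg0);
        return 0;
}

/* Exit handler: read the return value through ftrace_regs. */
static void sample_exit(struct fprobe *fp, unsigned long entry_ip,
                        unsigned long ret_ip, struct ftrace_regs *fregs,
                        void *entry_data)
{
        pr_debug("exit ip=0x%lx ret=0x%lx\n", entry_ip,
                 ftrace_regs_get_return_value(fregs));
}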
Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Update @fregs parameter explanation. --- Documentation/trace/fprobe.rst | 42 ++++++++++++++++++++++++++-------------- 1 file changed, 27 insertions(+), 15 deletions(-) diff --git a/Documentation/trace/fprobe.rst b/Documentation/trace/fprobe.rst index 196f52386aaa..f58bdc64504f 100644 --- a/Documentation/trace/fprobe.rst +++ b/Documentation/trace/fprobe.rst @@ -9,9 +9,10 @@ Fprobe - Function entry/exit probe Introduction ============ -Fprobe is a function entry/exit probe mechanism based on ftrace. -Instead of using ftrace full feature, if you only want to attach callbacks -on function entry and exit, similar to the kprobes and kretprobes, you can +Fprobe is a function entry/exit probe mechanism based on the function-graph +tracer. +Instead of tracing all functions, if you want to attach callbacks on specific +function entry and exit, similar to the kprobes and kretprobes, you can use fprobe. Compared with kprobes and kretprobes, fprobe gives faster instrumentation for multiple functions with single handler. This document describes how to use fprobe. @@ -91,12 +92,14 @@ The prototype of the entry/exit callback function are as follows: .. code-block:: c - int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data); + int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); - void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data); + void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); -Note that the @entry_ip is saved at function entry and passed to exit handler. -If the entry callback function returns !0, the corresponding exit callback will be cancelled. +Note that the @entry_ip is saved at function entry and passed to exit +handler. +If the entry callback function returns !0, the corresponding exit callback +will be cancelled. @fp This is the address of `fprobe` data structure related to this handler. @@ -112,12 +115,10 @@ If the entry callback function returns !0, the corresponding exit callback will This is the return address that the traced function will return to, somewhere in the caller. This can be used at both entry and exit. -@regs - This is the `pt_regs` data structure at the entry and exit. Note that - the instruction pointer of @regs may be different from the @entry_ip - in the entry_handler. If you need traced instruction pointer, you need - to use @entry_ip. On the other hand, in the exit_handler, the instruction - pointer of @regs is set to the current return address. +@fregs + This is the `ftrace_regs` data structure at the entry and exit. This + includes the function parameters, or the return values. So users can + access those values via the appropriate `ftrace_regs_*` APIs. @entry_data This is a local storage to share the data between entry and exit handlers. @@ -125,6 +126,17 @@ If the entry callback function returns !0, the corresponding exit callback will and `entry_data_size` field when registering the fprobe, the storage is allocated and passed to both `entry_handler` and `exit_handler`.
+Entry data size and exit handlers on the same function +====================================================== + +Since the entry data is passed via the per-task stack, which has a limited size, +the entry data size per probe is limited to `15 * sizeof(long)`. Also note that +when different fprobes are probing the same function, this +limit becomes smaller. The entry data size is aligned to `sizeof(long)`, and +each fprobe which has an exit handler uses a `sizeof(long)` space on the stack, +so you should keep the number of fprobes on the same function as small as +possible. + Share the callbacks with kprobes ================================ @@ -165,8 +177,8 @@ This counter counts up when; - fprobe fails to take ftrace_recursion lock. This usually means that a function which is traced by other ftrace users is called from the entry_handler. - - fprobe fails to setup the function exit because of the shortage of rethook - (the shadow stack for hooking the function return.) + - fprobe fails to setup the function exit because it fails to allocate the + data buffer from the per-task shadow stack. The `fprobe::nmissed` field counts up in both cases. Therefore, the former skips both the entry and exit callbacks and the latter skips the exit