From patchwork Mon Nov 27 13:53:05 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170142
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 01/33] tracing: Add a comment about ftrace_regs definition
Date: Mon, 27 Nov 2023 22:53:05 +0900
Message-Id: <170109318538.343914.8426711105240287815.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Masami Hiramatsu (Google)
To clarify what will be expected on ftrace_regs, add a comment to the
architecture independent definition of the ftrace_regs.

Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
 - Add instruction pointer
Changes in v2:
 - newly added.
---
 include/linux/ftrace.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index e8921871ef9a..8b48fc621ea0 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -118,6 +118,32 @@ extern int ftrace_enabled;
 
 #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
 
+/**
+ * ftrace_regs - ftrace partial/optimal register set
+ *
+ * ftrace_regs represents a group of registers which is used at the
+ * function entry and exit. There are three types of registers.
+ *
+ * - Registers for passing the parameters to callee, including the stack
+ *   pointer. (e.g. rcx, rdx, rdi, rsi, r8, r9 and rsp on x86_64)
+ * - Registers for passing the return values to caller.
+ *   (e.g. rax and rdx on x86_64)
+ * - Registers for hooking the function call and return including the
+ *   frame pointer (the frame pointer is architecture/config dependent)
+ *   (e.g. rip, rbp and rsp for x86_64)
+ *
+ * Also, architecture dependent fields can be used for internal process.
+ * (e.g. orig_ax on x86_64)
+ *
+ * On the function entry, those registers will be restored except for
+ * the stack pointer, so that user can change the function parameters
+ * and instruction pointer (e.g. live patching.)
+ * On the function exit, only registers which is used for return values
+ * are restored.
+ *
+ * NOTE: user *must not* access regs directly, only do it via APIs, because
+ * the member can be changed according to the architecture.
+ */
 struct ftrace_regs {
 	struct pt_regs regs;
 };
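[The comment above insists that callers never touch ftrace_regs members directly. As a minimal sketch, not part of the patch, this is what a callback that respects that rule looks like; the callback and ops names are made up, while ftrace_get_regs(), instruction_pointer() and FTRACE_OPS_FL_SAVE_REGS are existing kernel interfaces.]

#include <linux/ftrace.h>
#include <linux/ptrace.h>
#include <linux/printk.h>

static void my_trace_callback(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	/* ftrace_get_regs() returns NULL when a full pt_regs is not available */
	struct pt_regs *regs = ftrace_get_regs(fregs);

	if (regs)
		pr_debug("traced %ps, ip=%lx\n", (void *)ip,
			 instruction_pointer(regs));
}

static struct ftrace_ops my_ops = {
	.func	= my_trace_callback,
	/* Request the full register set so ftrace_get_regs() succeeds */
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};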
From patchwork Mon Nov 27 13:53:18 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170143
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 02/33] x86: tracing: Add ftrace_regs definition in the header
Date: Mon, 27 Nov 2023 22:53:18 +0900
Message-Id: <170109319767.343914.9766180271697584732.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>
From: Masami Hiramatsu (Google)

Add ftrace_regs definition for x86_64 in the ftrace header to clarify
what register will be accessible from ftrace_regs.

Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
 - Add rip to be saved.
Changes in v2:
 - Newly added.
---
 arch/x86/include/asm/ftrace.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 897cf02c20b1..415cf7a2ec2c 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -36,6 +36,12 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 
 #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
 struct ftrace_regs {
+	/*
+	 * On the x86_64, the ftrace_regs saves;
+	 * rax, rcx, rdx, rdi, rsi, r8, r9, rbp, rip and rsp.
+	 * Also orig_ax is used for passing direct trampoline address.
+	 * x86_32 doesn't support ftrace_regs.
+	 */
 	struct pt_regs regs;
 };
From patchwork Mon Nov 27 13:53:30 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170144
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 03/33] seq_buf: Export seq_buf_puts()
Date: Mon, 27 Nov 2023 22:53:30 +0900
Message-Id: <170109320996.343914.1584440210780959121.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Christophe JAILLET
Mark seq_buf_puts() which is part of the seq_buf API to be exported to
kernel loadable GPL modules.

Link: https://lkml.kernel.org/r/b9e3737f66ec2450221b492048ce0d9c65c84953.1698861216.git.christophe.jaillet@wanadoo.fr

Signed-off-by: Christophe JAILLET
Signed-off-by: Steven Rostedt (Google)
Signed-off-by: Masami Hiramatsu (Google)
---
 lib/seq_buf.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/seq_buf.c b/lib/seq_buf.c
index 45c450f423fa..46a1b00c3815 100644
--- a/lib/seq_buf.c
+++ b/lib/seq_buf.c
@@ -189,6 +189,7 @@ int seq_buf_puts(struct seq_buf *s, const char *str)
 	seq_buf_set_overflow(s);
 	return -1;
 }
+EXPORT_SYMBOL_GPL(seq_buf_puts);
 
 /**
  * seq_buf_putc - sequence printing of simple character
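[A minimal sketch, not from the patch, of what the export enables: a GPL module building a string with the existing seq_buf helpers seq_buf_init(), seq_buf_puts() and seq_buf_used(). The module and buffer names are made up.]

#include <linux/module.h>
#include <linux/seq_buf.h>

static char example_buf[64];	/* zero-initialized, so %s below stays terminated */

static int __init seq_buf_example_init(void)
{
	struct seq_buf s;

	seq_buf_init(&s, example_buf, sizeof(example_buf));
	seq_buf_puts(&s, "hello ");	/* now callable from a module */
	seq_buf_puts(&s, "seq_buf");
	pr_info("wrote %u bytes: %s\n", seq_buf_used(&s), example_buf);
	return 0;
}
module_init(seq_buf_example_init);
MODULE_LICENSE("GPL");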
From patchwork Mon Nov 27 13:53:43 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170145
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 04/33] function_graph: Convert ret_stack to a series of longs
Date: Mon, 27 Nov 2023 22:53:43 +0900
Message-Id: <170109322264.343914.17065521139612510215.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Steven Rostedt (VMware)
In order to make it possible to have multiple callbacks registered with the
function_graph tracer, the retstack needs to be converted from an array of
ftrace_ret_stack structures to an array of longs. This will allow to store
the list of callbacks on the stack for the return side of the functions.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 include/linux/sched.h |   2 -
 kernel/trace/fgraph.c | 124 ++++++++++++++++++++++++++++---------------------
 2 files changed, 71 insertions(+), 55 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..3af00d726847 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1386,7 +1386,7 @@ struct task_struct {
 	int curr_ret_depth;
 
 	/* Stack of return addresses for return function tracing: */
-	struct ftrace_ret_stack	*ret_stack;
+	unsigned long *ret_stack;
 
 	/* Timestamp for last schedule: */
 	unsigned long long ftrace_timestamp;
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index c83c005e654e..30edeb6d4aa9 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -25,6 +25,18 @@
 #define ASSIGN_OPS_HASH(opsname, val)
 #endif
 
+#define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack)
+#define FGRAPH_RET_INDEX (ALIGN(FGRAPH_RET_SIZE, sizeof(long)) / sizeof(long))
+#define SHADOW_STACK_SIZE (PAGE_SIZE)
+#define SHADOW_STACK_INDEX			\
+	(ALIGN(SHADOW_STACK_SIZE, sizeof(long)) / sizeof(long))
+/* Leave on a buffer at the end */
+#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX)
+
+#define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index]))
+#define RET_STACK_INC(c) ({ c += FGRAPH_RET_INDEX; })
+#define RET_STACK_DEC(c) ({ c -= FGRAPH_RET_INDEX; })
+
 DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
 
 int ftrace_graph_active;
@@ -69,6 +81,7 @@ static int
 ftrace_push_return_trace(unsigned long ret, unsigned long func,
 			 unsigned long frame_pointer, unsigned long *retp)
 {
+	struct ftrace_ret_stack *ret_stack;
 	unsigned long long calltime;
 	int index;
 
@@ -85,23 +98,25 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
 	smp_rmb();
 
 	/* The return trace stack is full */
-	if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1) {
+	if (current->curr_ret_stack >= SHADOW_STACK_MAX_INDEX) {
 		atomic_inc(&current->trace_overrun);
 		return -EBUSY;
 	}
 
 	calltime = trace_clock_local();
 
-	index = ++current->curr_ret_stack;
+	index = current->curr_ret_stack;
+	RET_STACK_INC(current->curr_ret_stack);
+	ret_stack = RET_STACK(current, index);
 	barrier();
-	current->ret_stack[index].ret = ret;
-	current->ret_stack[index].func = func;
-	current->ret_stack[index].calltime = calltime;
+	ret_stack->ret = ret;
+	ret_stack->func = func;
+	ret_stack->calltime = calltime;
 #ifdef HAVE_FUNCTION_GRAPH_FP_TEST
-	current->ret_stack[index].fp = frame_pointer;
+	ret_stack->fp = frame_pointer;
 #endif
 #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
-	current->ret_stack[index].retp = retp;
+	ret_stack->retp = retp;
 #endif
 	return 0;
 }
@@ -148,7 +163,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 	return 0;
  out_ret:
-	current->curr_ret_stack--;
+	RET_STACK_DEC(current->curr_ret_stack);
  out:
 	current->curr_ret_depth--;
 	return -EBUSY;
 }
@@ -159,11 +174,13 @@ static void
 ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
 			unsigned long frame_pointer)
 {
+	struct ftrace_ret_stack *ret_stack;
 	int index;
 
 	index = current->curr_ret_stack;
+	RET_STACK_DEC(index);
 
-	if (unlikely(index < 0 || index >= FTRACE_RETFUNC_DEPTH)) {
+	if (unlikely(index < 0 || index > SHADOW_STACK_MAX_INDEX)) {
 		ftrace_graph_stop();
 		WARN_ON(1);
 		/* Might as well panic, otherwise we have no where to go */
@@ -171,6 +188,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
 		return;
 	}
 
+	ret_stack = RET_STACK(current, index);
 #ifdef HAVE_FUNCTION_GRAPH_FP_TEST
 	/*
 	 * The arch may choose to record the frame pointer used
@@ -186,22 +204,22 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
 	 * Note, -mfentry does not use frame pointers, and this test
 	 * is not needed if CC_USING_FENTRY is set.
	 */
-	if (unlikely(current->ret_stack[index].fp != frame_pointer)) {
+	if (unlikely(ret_stack->fp != frame_pointer)) {
 		ftrace_graph_stop();
 		WARN(1, "Bad frame pointer: expected %lx, received %lx\n"
 		     "  from func %ps return to %lx\n",
 		     current->ret_stack[index].fp,
 		     frame_pointer,
-		     (void *)current->ret_stack[index].func,
-		     current->ret_stack[index].ret);
+		     (void *)ret_stack->func,
+		     ret_stack->ret);
 		*ret = (unsigned long)panic;
 		return;
 	}
 #endif
 
-	*ret = current->ret_stack[index].ret;
-	trace->func = current->ret_stack[index].func;
-	trace->calltime = current->ret_stack[index].calltime;
+	*ret = ret_stack->ret;
+	trace->func = ret_stack->func;
+	trace->calltime = ret_stack->calltime;
 	trace->overrun = atomic_read(&current->trace_overrun);
 	trace->depth = current->curr_ret_depth--;
 	/*
@@ -262,7 +280,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
 	 * curr_ret_stack is after that.
 	 */
 	barrier();
-	current->curr_ret_stack--;
+	RET_STACK_DEC(current->curr_ret_stack);
 
 	if (unlikely(!ret)) {
 		ftrace_graph_stop();
@@ -305,12 +323,13 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
 struct ftrace_ret_stack *
 ftrace_graph_get_ret_stack(struct task_struct *task, int idx)
 {
-	idx = task->curr_ret_stack - idx;
+	int index = task->curr_ret_stack;
 
-	if (idx >= 0 && idx <= task->curr_ret_stack)
-		return &task->ret_stack[idx];
+	index -= FGRAPH_RET_INDEX * (idx + 1);
+	if (index < 0)
+		return NULL;
 
-	return NULL;
+	return RET_STACK(task, index);
 }
 
 /**
@@ -332,18 +351,20 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx)
 unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
 				    unsigned long ret, unsigned long *retp)
 {
+	struct ftrace_ret_stack *ret_stack;
 	int index = task->curr_ret_stack;
 	int i;
 
 	if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler))
 		return ret;
 
-	if (index < 0)
-		return ret;
+	RET_STACK_DEC(index);
 
-	for (i = 0; i <= index; i++)
-		if (task->ret_stack[i].retp == retp)
-			return task->ret_stack[i].ret;
+	for (i = index; i >= 0; RET_STACK_DEC(i)) {
+		ret_stack = RET_STACK(task, i);
+		if (ret_stack->retp == retp)
+			return ret_stack->ret;
+	}
 
 	return ret;
 }
@@ -357,14 +378,15 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
 		return ret;
 
 	task_idx = task->curr_ret_stack;
+	RET_STACK_DEC(task_idx);
 
 	if (!task->ret_stack || task_idx < *idx)
 		return ret;
 
 	task_idx -= *idx;
-	(*idx)++;
+	RET_STACK_INC(*idx);
 
-	return task->ret_stack[task_idx].ret;
+	return RET_STACK(task, task_idx);
 }
 #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */
@@ -402,7 +424,7 @@ trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;
 static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub;
 
 /* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */
-static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
+static int alloc_retstack_tasklist(unsigned long **ret_stack_list)
 {
 	int i;
 	int ret = 0;
@@ -410,10 +432,7 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 	struct task_struct *g, *t;
 
 	for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) {
-		ret_stack_list[i] =
-			kmalloc_array(FTRACE_RETFUNC_DEPTH,
-				      sizeof(struct ftrace_ret_stack),
-				      GFP_KERNEL);
+		ret_stack_list[i] = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
 		if (!ret_stack_list[i]) {
 			start = 0;
 			end = i;
@@ -431,9 +450,9 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 
 		if (t->ret_stack == NULL) {
 			atomic_set(&t->trace_overrun, 0);
-			t->curr_ret_stack = -1;
+			t->curr_ret_stack = 0;
 			t->curr_ret_depth = -1;
-			/* Make sure the tasks see the -1 first: */
+			/* Make sure the tasks see the 0 first: */
 			smp_wmb();
 			t->ret_stack = ret_stack_list[start++];
 		}
@@ -453,6 +472,7 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
 				struct task_struct *next,
 				unsigned int prev_state)
 {
+	struct ftrace_ret_stack *ret_stack;
 	unsigned long long timestamp;
 	int index;
 
@@ -477,8 +497,11 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
 	 */
 	timestamp -= next->ftrace_timestamp;
 
-	for (index = next->curr_ret_stack; index >= 0; index--)
-		next->ret_stack[index].calltime += timestamp;
+	for (index = next->curr_ret_stack - FGRAPH_RET_INDEX; index >= 0; ) {
+		ret_stack = RET_STACK(next, index);
+		ret_stack->calltime += timestamp;
+		index -= FGRAPH_RET_INDEX;
+	}
 }
 
 static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace)
@@ -521,10 +544,10 @@ void update_function_graph_func(void)
 		ftrace_graph_entry = __ftrace_graph_entry;
 }
 
-static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
+static DEFINE_PER_CPU(unsigned long *, idle_ret_stack);
 
 static void
-graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
+graph_init_task(struct task_struct *t, unsigned long *ret_stack)
 {
 	atomic_set(&t->trace_overrun, 0);
 	t->ftrace_timestamp = 0;
@@ -539,7 +562,7 @@ graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
  */
 void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
 {
-	t->curr_ret_stack = -1;
+	t->curr_ret_stack = 0;
 	t->curr_ret_depth = -1;
 	/*
 	 * The idle task has no parent, it either has its own
@@ -549,14 +572,11 @@ void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
 	WARN_ON(t->ret_stack != per_cpu(idle_ret_stack, cpu));
 
 	if (ftrace_graph_active) {
-		struct ftrace_ret_stack *ret_stack;
+		unsigned long *ret_stack;
 
 		ret_stack = per_cpu(idle_ret_stack, cpu);
 		if (!ret_stack) {
-			ret_stack =
-				kmalloc_array(FTRACE_RETFUNC_DEPTH,
-					      sizeof(struct ftrace_ret_stack),
-					      GFP_KERNEL);
+			ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
 			if (!ret_stack)
 				return;
 			per_cpu(idle_ret_stack, cpu) = ret_stack;
@@ -570,15 +590,13 @@ void ftrace_graph_init_task(struct task_struct *t)
 {
 	/* Make sure we do not use the parent ret_stack */
 	t->ret_stack = NULL;
-	t->curr_ret_stack = -1;
+	t->curr_ret_stack = 0;
 	t->curr_ret_depth = -1;
 
 	if (ftrace_graph_active) {
-		struct ftrace_ret_stack *ret_stack;
+		unsigned long *ret_stack;
 
-		ret_stack = kmalloc_array(FTRACE_RETFUNC_DEPTH,
-					  sizeof(struct ftrace_ret_stack),
-					  GFP_KERNEL);
+		ret_stack = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
 		if (!ret_stack)
 			return;
 		graph_init_task(t, ret_stack);
@@ -587,7 +605,7 @@ void ftrace_graph_init_task(struct task_struct *t)
 
 void ftrace_graph_exit_task(struct task_struct *t)
 {
-	struct ftrace_ret_stack	*ret_stack = t->ret_stack;
+	unsigned long *ret_stack = t->ret_stack;
 
 	t->ret_stack = NULL;
 	/* NULL must become visible to IRQs before we free it: */
@@ -599,12 +617,10 @@ void ftrace_graph_exit_task(struct task_struct *t)
 /* Allocate a return stack for each task */
 static int start_graph_tracing(void)
 {
-	struct ftrace_ret_stack **ret_stack_list;
+	unsigned long **ret_stack_list;
 	int ret, cpu;
 
-	ret_stack_list = kmalloc_array(FTRACE_RETSTACK_ALLOC_SIZE,
-				       sizeof(struct ftrace_ret_stack *),
-				       GFP_KERNEL);
+	ret_stack_list = kmalloc(SHADOW_STACK_SIZE, GFP_KERNEL);
 
 	if (!ret_stack_list)
 		return -ENOMEM;
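[To make the index arithmetic above concrete, here is a small stand-alone, user-space style sketch of the same idea, not kernel code: fixed-size entries live on a flat array of longs and the "stack pointer" is counted in longs, exactly what RET_STACK()/RET_STACK_INC()/RET_STACK_DEC() do. All names below are made up for illustration.]

#include <stdio.h>

struct demo_ret_stack {
	unsigned long ret;
	unsigned long func;
	unsigned long long calltime;
};

#define DEMO_RET_SIZE		sizeof(struct demo_ret_stack)
#define DEMO_RET_INDEX		(DEMO_RET_SIZE / sizeof(long))
#define DEMO_STACK_INDEX	128	/* shadow stack of 128 longs */

static long shadow_stack[DEMO_STACK_INDEX];
static int curr_index;			/* counts longs, like curr_ret_stack */

#define DEMO_RET_STACK(idx)	((struct demo_ret_stack *)&shadow_stack[idx])

int main(void)
{
	/* "push": remember the start index, then advance by one entry */
	int index = curr_index;
	curr_index += DEMO_RET_INDEX;

	struct demo_ret_stack *entry = DEMO_RET_STACK(index);
	entry->ret = 0x1234;
	entry->func = 0x5678;
	entry->calltime = 42;

	/* "pop": step the index back and read the entry it now points at */
	curr_index -= DEMO_RET_INDEX;
	printf("popped func=%#lx ret=%#lx (entry uses %zu longs)\n",
	       DEMO_RET_STACK(curr_index)->func,
	       DEMO_RET_STACK(curr_index)->ret, DEMO_RET_INDEX);
	return 0;
}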
From patchwork Mon Nov 27 13:53:56 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170146
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 05/33] fgraph: Use BUILD_BUG_ON() to make sure we have structures divisible by long
Date: Mon, 27 Nov 2023 22:53:56 +0900
Message-Id: <170109323585.343914.8827157689000519661.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Steven Rostedt (VMware)
Instead of using "ALIGN()", use BUILD_BUG_ON() as the structures should
always be divisible by sizeof(long).

Link: http://lkml.kernel.org/r/20190524111144.GI2589@hirez.programming.kicks-ass.net

Suggested-by: Peter Zijlstra
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 kernel/trace/fgraph.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 30edeb6d4aa9..837daf929d2a 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -26,10 +26,9 @@
 #endif
 
 #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack)
-#define FGRAPH_RET_INDEX (ALIGN(FGRAPH_RET_SIZE, sizeof(long)) / sizeof(long))
+#define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long))
 #define SHADOW_STACK_SIZE (PAGE_SIZE)
-#define SHADOW_STACK_INDEX			\
-	(ALIGN(SHADOW_STACK_SIZE, sizeof(long)) / sizeof(long))
+#define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long))
 /* Leave on a buffer at the end */
 #define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX)
 
@@ -91,6 +90,8 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
 	if (!current->ret_stack)
 		return -EBUSY;
 
+	BUILD_BUG_ON(SHADOW_STACK_SIZE % sizeof(long));
+
 	/*
 	 * We must make sure the ret_stack is tested before we read
 	 * anything else.
@@ -325,6 +326,8 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx)
 {
 	int index = task->curr_ret_stack;
 
+	BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long));
+
 	index -= FGRAPH_RET_INDEX * (idx + 1);
 	if (index < 0)
 		return NULL;
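[For readers less familiar with the pattern, a tiny stand-alone sketch, not from the patch: BUILD_BUG_ON() takes a compile-time constant condition and turns a non-zero value into a build failure, so the divisible-by-long assumption is checked with no runtime cost. The struct and function names here are made up.]

#include <linux/build_bug.h>

struct demo_entry {
	unsigned long ret;
	unsigned long func;
	unsigned long long calltime;
};

static inline void demo_check_layout(void)
{
	/* Fails to compile if the struct is not a multiple of sizeof(long) */
	BUILD_BUG_ON(sizeof(struct demo_entry) % sizeof(long));
}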
From patchwork Mon Nov 27 13:54:08 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170147
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v3 06/33] function_graph: Add an array structure that will allow multiple callbacks
Date: Mon, 27 Nov 2023 22:54:08 +0900
Message-Id: <170109324834.343914.5305076823761174091.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>
From: Steven Rostedt (VMware)

Add an array structure that will eventually allow the function graph tracer
to have up to 16 simultaneous callbacks attached. It's an array of 16
fgraph_ops pointers, that is assigned when one is registered.

On entry of a function the entry of the first item in the array is called,
and if it returns zero, then the callback returns non zero if it wants the
return callback to be called on exit of the function.

The array will simplify the process of having more than one callback
attached to the same function, as its index into the array can be stored on
the shadow stack. We need to only save the index, because this will allow
the fgraph_ops to be freed before the function returns (which may happen if
the function calls schedule() for a long time).

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v2:
 - Remove unneeded brace.
---
 kernel/trace/fgraph.c | 114 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 81 insertions(+), 33 deletions(-)

diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 837daf929d2a..86df3ca6964f 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -39,6 +39,11 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
 
 int ftrace_graph_active;
 
+static int fgraph_array_cnt;
+#define FGRAPH_ARRAY_SIZE 16
+
+static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
+
 /* Both enabled by default (can be cleared by function_graph tracer flags */
 static bool fgraph_sleep_time = true;
 
@@ -62,6 +67,20 @@ int __weak ftrace_disable_ftrace_graph_caller(void)
 }
 #endif
 
+int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
+{
+	return 0;
+}
+
+static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace)
+{
+}
+
+static struct fgraph_ops fgraph_stub = {
+	.entryfunc = ftrace_graph_entry_stub,
+	.retfunc = ftrace_graph_ret_stub,
+};
+
 /**
  * ftrace_graph_stop - set to permanently disable function graph tracing
  *
@@ -159,7 +178,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 		goto out;
 
 	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
+	if (!fgraph_array[0]->entryfunc(&trace))
 		goto out_ret;
 
 	return 0;
@@ -274,7 +293,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
 	trace.retval = fgraph_ret_regs_return_value(ret_regs);
 #endif
 	trace.rettime = trace_clock_local();
-	ftrace_graph_return(&trace);
+	fgraph_array[0]->retfunc(&trace);
 	/*
 	 * The ftrace_graph_return() may still access the current
 	 * ret_stack structure, we need to make sure the update of
@@ -410,11 +429,6 @@ void ftrace_graph_sleep_time_control(bool enable)
 	fgraph_sleep_time = enable;
 }
 
-int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
-{
-	return 0;
-}
-
 /*
  * Simply points to ftrace_stub, but with the proper protocol.
  * Defined by the linker script in linux/vmlinux.lds.h
@@ -652,37 +666,54 @@ static int start_graph_tracing(void)
 int register_ftrace_graph(struct fgraph_ops *gops)
 {
 	int ret = 0;
+	int i;
 
 	mutex_lock(&ftrace_lock);
 
-	/* we currently allow only one tracer registered at a time */
-	if (ftrace_graph_active) {
+	if (!fgraph_array[0]) {
+		/* The array must always have real data on it */
+		for (i = 0; i < FGRAPH_ARRAY_SIZE; i++)
+			fgraph_array[i] = &fgraph_stub;
+	}
+
+	/* Look for an available spot */
+	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
+		if (fgraph_array[i] == &fgraph_stub)
+			break;
+	}
+	if (i >= FGRAPH_ARRAY_SIZE) {
 		ret = -EBUSY;
 		goto out;
 	}
 
-	register_pm_notifier(&ftrace_suspend_notifier);
+	fgraph_array[i] = gops;
+	if (i + 1 > fgraph_array_cnt)
+		fgraph_array_cnt = i + 1;
 
 	ftrace_graph_active++;
-	ret = start_graph_tracing();
-	if (ret) {
-		ftrace_graph_active--;
-		goto out;
-	}
 
-	ftrace_graph_return = gops->retfunc;
+	if (ftrace_graph_active == 1) {
+		register_pm_notifier(&ftrace_suspend_notifier);
+		ret = start_graph_tracing();
+		if (ret) {
+			ftrace_graph_active--;
+			goto out;
+		}
+
+		ftrace_graph_return = gops->retfunc;
 
-	/*
-	 * Update the indirect function to the entryfunc, and the
-	 * function that gets called to the entry_test first. Then
-	 * call the update fgraph entry function to determine if
-	 * the entryfunc should be called directly or not.
-	 */
-	__ftrace_graph_entry = gops->entryfunc;
-	ftrace_graph_entry = ftrace_graph_entry_test;
-	update_function_graph_func();
+		/*
+		 * Update the indirect function to the entryfunc, and the
+		 * function that gets called to the entry_test first. Then
+		 * call the update fgraph entry function to determine if
+		 * the entryfunc should be called directly or not.
+		 */
+		__ftrace_graph_entry = gops->entryfunc;
+		ftrace_graph_entry = ftrace_graph_entry_test;
+		update_function_graph_func();
 
-	ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
+		ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
+	}
 out:
 	mutex_unlock(&ftrace_lock);
 	return ret;
@@ -690,19 +721,36 @@ int register_ftrace_graph(struct fgraph_ops *gops)
 
 void unregister_ftrace_graph(struct fgraph_ops *gops)
 {
+	int i;
+
 	mutex_lock(&ftrace_lock);
 
 	if (unlikely(!ftrace_graph_active))
 		goto out;
 
-	ftrace_graph_active--;
-	ftrace_graph_return = ftrace_stub_graph;
-	ftrace_graph_entry = ftrace_graph_entry_stub;
-	__ftrace_graph_entry = ftrace_graph_entry_stub;
-	ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
-	unregister_pm_notifier(&ftrace_suspend_notifier);
-	unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
+	for (i = 0; i < fgraph_array_cnt; i++)
+		if (gops == fgraph_array[i])
+			break;
+	if (i >= fgraph_array_cnt)
+		goto out;
+
+	fgraph_array[i] = &fgraph_stub;
+	if (i + 1 == fgraph_array_cnt) {
+		for (; i >= 0; i--)
+			if (fgraph_array[i] != &fgraph_stub)
+				break;
+		fgraph_array_cnt = i + 1;
+	}
+
+	ftrace_graph_active--;
+	if (!ftrace_graph_active) {
+		ftrace_graph_return = ftrace_stub_graph;
+		ftrace_graph_entry = ftrace_graph_entry_stub;
+		__ftrace_graph_entry = ftrace_graph_entry_stub;
+		ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
+		unregister_pm_notifier(&ftrace_suspend_notifier);
+		unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
	}
 out:
 	mutex_unlock(&ftrace_lock);
 }
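[As a minimal sketch, not from the patch, of what a function_graph user looks like once slots exist in fgraph_array[]: an fgraph_ops with an entry and a return handler, attached with register_ftrace_graph() and detached with unregister_ftrace_graph(). The callback names are made up; the structure fields and registration functions are the existing API this series extends.]

#include <linux/ftrace.h>

static int my_entry(struct ftrace_graph_ent *trace)
{
	/* Non-zero means: also call the return handler for this function */
	return 1;
}

static void my_return(struct ftrace_graph_ret *trace)
{
	/* trace->func, trace->calltime and trace->rettime are available here */
}

static struct fgraph_ops my_gops = {
	.entryfunc	= my_entry,
	.retfunc	= my_return,
};

/*
 * register_ftrace_graph(&my_gops) claims a slot in fgraph_array[];
 * unregister_ftrace_graph(&my_gops) puts the stub back in that slot.
 */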
From patchwork Mon Nov 27 13:54:20 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170148
[2620:137:e000::3:7]) by mx.google.com with ESMTPS id h2-20020a056a00230200b006cd8f0ed07csi2104588pfh.191.2023.11.27.05.55.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 05:55:24 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:7 as permitted sender) client-ip=2620:137:e000::3:7; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=JLAVKChh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:7 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by snail.vger.email (Postfix) with ESMTP id 9FCF2809D3EA; Mon, 27 Nov 2023 05:55:23 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at snail.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233257AbjK0NzI (ORCPT + 99 others); Mon, 27 Nov 2023 08:55:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233366AbjK0Ny2 (ORCPT ); Mon, 27 Nov 2023 08:54:28 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA4DB1BD3 for ; Mon, 27 Nov 2023 05:54:27 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 89E34C433CA; Mon, 27 Nov 2023 13:54:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093267; bh=APQJdEZ6pt+ekGviwAieGT/uClX2gyf+ahHvsmx9rEQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JLAVKChh5o9/eIRDCUpybrwDO8Q4MOmtXtguDMK36BRP4BedUgK3EC98Ch0QzcNdp c2VFNm2iXC8x6crFt0SASedd5n2e71Wy539Bwk/iUSwBMHhKEmfm4CkzpnVcpBlHvv AqpJqI4fnAZL9v4kglG5zdSrlM8fJCYG6t3nByikmD2ZnuOYqiX7GOgc3V0GoActAO T8P99Zx6UZ97hA9nnyhTs5AXO3XcotxTIaSyRWN5hzaj+FDEqx1Aq/SYchi/Vmw20C OetstjrxjLDBixEcF6rlgGMmocagMob9Yc7YvIR4xwewEgDN9lmd7Fri4xKJ7tLtfw lG40JCjTBvw4A== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 07/33] function_graph: Allow multiple users to attach to function graph Date: Mon, 27 Nov 2023 22:54:20 +0900 Message-Id: <170109326053.343914.15283950231492519459.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 27 Nov 2023 05:55:23 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783725634496324582 X-GMAIL-MSGID: 1783725634496324582 From: Steven 
Rostedt (VMware) Allow for multiple users to attach to function graph tracer at the same time. Only 16 simultaneous users can attach to the tracer. This is because there's an array that stores the pointers to the attached fgraph_ops. When a function being traced is entered, each of the ftrace_ops entryfunc is called and if it returns non zero, its index into the array will be added to the shadow stack. On exit of the function being traced, the shadow stack will contain the indexes of the ftrace_ops on the array that want their retfunc to be called. Because a function may sleep for a long time (if a task sleeps itself), the return of the function may be literally days later. If the ftrace_ops is removed, its place on the array is replaced with a ftrace_ops that contains the stub functions and that will be called when the function finally returns. If another ftrace_ops is added that happens to get the same index into the array, its return function may be called. But that's actually the way things current work with the old function graph tracer. If one tracer is removed and another is added, the new one will get the return calls of the function traced by the previous one, thus this is not a regression. This can be fixed by adding a counter to each time the array item is updated and save that on the shadow stack as well, such that it won't be called if the index saved does not match the index on the array. Note, being able to filter functions when both are called is not completely handled yet, but that shouldn't be too hard to manage. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Check return value of the ftrace_pop_return_trace() instead of 'ret' since 'ret' is set to the address of panic(). - Fix typo and make lines shorter than 76 chars in description. --- kernel/trace/fgraph.c | 332 +++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 280 insertions(+), 52 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 86df3ca6964f..8aba93be11b2 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -27,23 +27,144 @@ #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack) #define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long)) + +/* + * On entry to a function (via function_graph_enter()), a new ftrace_ret_stack + * is allocated on the task's ret_stack, then each fgraph_ops on the + * fgraph_array[]'s entryfunc is called and if that returns non-zero, the + * index into the fgraph_array[] for that fgraph_ops is added to the ret_stack. + * As the associated ftrace_ret_stack saved for those fgraph_ops needs to + * be found, the index to it is also added to the ret_stack along with the + * index of the fgraph_array[] to each fgraph_ops that needs their retfunc + * called. + * + * The top of the ret_stack (when not empty) will always have a reference + * to the last ftrace_ret_stack saved. 
All references to the + * ftrace_ret_stack has the format of: + * + * bits: 0 - 13 Index in words from the previous ftrace_ret_stack + * bits: 14 - 15 Type of storage + * 0 - reserved + * 1 - fgraph_array index + * For fgraph_array_index: + * bits: 16 - 23 The fgraph_ops fgraph_array index + * + * That is, at the end of function_graph_enter, if the first and forth + * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called + * on the return of the function being traced, this is what will be on the + * task's shadow ret_stack: (the stack grows upward) + * + * | | <- task->curr_ret_stack + * +----------------------------------+ + * | (3 << FGRAPH_ARRAY_SHIFT)|(2) | ( 3 for index of fourth fgraph_ops) + * +----------------------------------+ + * | (0 << FGRAPH_ARRAY_SHIFT)|(1) | ( 0 for index of first fgraph_ops) + * +----------------------------------+ + * | struct ftrace_ret_stack | + * | (stores the saved ret pointer) | + * +----------------------------------+ + * | (X) | (N) | ( N words away from previous ret_stack) + * | | + * + * If a backtrace is required, and the real return pointer needs to be + * fetched, then it looks at the task's curr_ret_stack index, if it + * is greater than zero, it would subtact one, and then mask the value + * on the ret_stack by FGRAPH_RET_INDEX_MASK and subtract FGRAPH_RET_INDEX + * from that, to get the index of the ftrace_ret_stack structure stored + * on the shadow stack. + */ + +#define FGRAPH_RET_INDEX_SIZE 14 +#define FGRAPH_RET_INDEX_MASK ((1 << FGRAPH_RET_INDEX_SIZE) - 1) + + +#define FGRAPH_TYPE_SIZE 2 +#define FGRAPH_TYPE_MASK ((1 << FGRAPH_TYPE_SIZE) - 1) +#define FGRAPH_TYPE_SHIFT FGRAPH_RET_INDEX_SIZE + +enum { + FGRAPH_TYPE_RESERVED = 0, + FGRAPH_TYPE_ARRAY = 1, +}; + +#define FGRAPH_ARRAY_SIZE 16 +#define FGRAPH_ARRAY_MASK ((1 << FGRAPH_ARRAY_SIZE) - 1) +#define FGRAPH_ARRAY_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) + +/* Currently the max stack index can't be more than register callers */ +#define FGRAPH_MAX_INDEX FGRAPH_ARRAY_SIZE + +#define FGRAPH_FRAME_SIZE (FGRAPH_RET_SIZE + FGRAPH_ARRAY_SIZE * (sizeof(long))) +#define FGRAPH_FRAME_INDEX (ALIGN(FGRAPH_FRAME_SIZE, \ + sizeof(long)) / sizeof(long)) #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ -#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - FGRAPH_RET_INDEX) +#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1)) #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) -#define RET_STACK_INC(c) ({ c += FGRAPH_RET_INDEX; }) -#define RET_STACK_DEC(c) ({ c -= FGRAPH_RET_INDEX; }) DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; static int fgraph_array_cnt; -#define FGRAPH_ARRAY_SIZE 16 static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; +static inline int get_ret_stack_index(struct task_struct *t, int offset) +{ + return current->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; +} + +static inline int get_fgraph_type(struct task_struct *t, int offset) +{ + return (current->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & + FGRAPH_TYPE_MASK; +} + +static inline int get_fgraph_array(struct task_struct *t, int offset) +{ + return (current->ret_stack[offset] >> FGRAPH_ARRAY_SHIFT) & + FGRAPH_ARRAY_MASK; +} + +/* + * @offset: The index into @t->ret_stack to find the ret_stack entry + * @index: Where to place the index into @t->ret_stack of that entry + * + * Calling this with: + * + * offset = task->curr_ret_stack; + * 
do { + * ret_stack = get_ret_stack(task, offset, &offset); + * } while (ret_stack); + * + * Will iterate through all the ret_stack entries from curr_ret_stack + * down to the first one. + */ +static inline struct ftrace_ret_stack * +get_ret_stack(struct task_struct *t, int offset, int *index) +{ + int idx; + + BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long)); + + if (offset <= 0) + return NULL; + + idx = get_ret_stack_index(t, offset - 1); + + if (idx <= 0 || idx > FGRAPH_MAX_INDEX) + return NULL; + + offset -= idx + FGRAPH_RET_INDEX; + if (offset < 0) + return NULL; + + *index = offset; + return RET_STACK(t, offset); +} + /* Both enabled by default (can be cleared by function_graph tracer flags */ static bool fgraph_sleep_time = true; @@ -126,9 +247,34 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, calltime = trace_clock_local(); index = current->curr_ret_stack; - RET_STACK_INC(current->curr_ret_stack); + /* ret offset = 1 ; type = reserved */ + current->ret_stack[index + FGRAPH_RET_INDEX] = 1; ret_stack = RET_STACK(current, index); + ret_stack->ret = ret; + /* + * The unwinders expect curr_ret_stack to point to either zero + * or an index where to find the next ret_stack. Even though the + * ret stack might be bogus, we want to write the ret and the + * index to find the ret_stack before we increment the stack point. + * If an interrupt comes in now before we increment the curr_ret_stack + * it may blow away what we wrote. But that's fine, because the + * index will still be correct (even though the 'ret' won't be). + * What we worry about is the index being correct after we increment + * the curr_ret_stack and before we update that index, as if an + * interrupt comes in and does an unwind stack dump, it will need + * at least a correct index! + */ barrier(); + current->curr_ret_stack += FGRAPH_RET_INDEX + 1; + /* + * This next barrier is to ensure that an interrupt coming in + * will not corrupt what we are about to write. 
+ */ + barrier(); + + /* Still keep it reserved even if an interrupt came in */ + current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + ret_stack->ret = ret; ret_stack->func = func; ret_stack->calltime = calltime; @@ -159,6 +305,12 @@ int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp) { struct ftrace_graph_ent trace; + int offset; + int start; + int type; + int val; + int cnt = 0; + int i; #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS /* @@ -177,38 +329,87 @@ int function_graph_enter(unsigned long ret, unsigned long func, if (ftrace_push_return_trace(ret, func, frame_pointer, retp)) goto out; - /* Only trace if the calling function expects to */ - if (!fgraph_array[0]->entryfunc(&trace)) + /* Use start for the distance to ret_stack (skipping over reserve) */ + start = offset = current->curr_ret_stack - 2; + + for (i = 0; i < fgraph_array_cnt; i++) { + struct fgraph_ops *gops = fgraph_array[i]; + + if (gops == &fgraph_stub) + continue; + + if ((offset == start) && + (current->curr_ret_stack >= SHADOW_STACK_INDEX - 1)) { + atomic_inc(¤t->trace_overrun); + break; + } + if (fgraph_array[i]->entryfunc(&trace)) { + offset = current->curr_ret_stack; + /* Check the top level stored word */ + type = get_fgraph_type(current, offset - 1); + + val = (i << FGRAPH_ARRAY_SHIFT) | + (FGRAPH_TYPE_ARRAY << FGRAPH_TYPE_SHIFT) | + ((offset - start) - 1); + + /* We can reuse the top word if it is reserved */ + if (type == FGRAPH_TYPE_RESERVED) { + current->ret_stack[offset - 1] = val; + cnt++; + continue; + } + val++; + + current->ret_stack[offset] = val; + /* + * Write the value before we increment, so that + * if an interrupt comes in after we increment + * it will still see the value and skip over + * this. + */ + barrier(); + current->curr_ret_stack++; + /* + * Have to write again, in case an interrupt + * came in before the increment and after we + * wrote the value. 
+ */ + barrier(); + current->ret_stack[offset] = val; + cnt++; + } + } + + if (!cnt) goto out_ret; return 0; out_ret: - RET_STACK_DEC(current->curr_ret_stack); + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; out: current->curr_ret_depth--; return -EBUSY; } /* Retrieve a function return address to the trace stack on thread info.*/ -static void +static struct ftrace_ret_stack * ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, unsigned long frame_pointer) { struct ftrace_ret_stack *ret_stack; int index; - index = current->curr_ret_stack; - RET_STACK_DEC(index); + ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); - if (unlikely(index < 0 || index > SHADOW_STACK_MAX_INDEX)) { + if (unlikely(!ret_stack)) { ftrace_graph_stop(); - WARN_ON(1); + WARN(1, "Bad function graph ret_stack pointer: %d", + current->curr_ret_stack); /* Might as well panic, otherwise we have no where to go */ *ret = (unsigned long)panic; - return; + return NULL; } - ret_stack = RET_STACK(current, index); #ifdef HAVE_FUNCTION_GRAPH_FP_TEST /* * The arch may choose to record the frame pointer used @@ -228,12 +429,12 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, ftrace_graph_stop(); WARN(1, "Bad frame pointer: expected %lx, received %lx\n" " from func %ps return to %lx\n", - current->ret_stack[index].fp, + ret_stack->fp, frame_pointer, (void *)ret_stack->func, ret_stack->ret); *ret = (unsigned long)panic; - return; + return NULL; } #endif @@ -241,13 +442,15 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, trace->func = ret_stack->func; trace->calltime = ret_stack->calltime; trace->overrun = atomic_read(¤t->trace_overrun); - trace->depth = current->curr_ret_depth--; + trace->depth = current->curr_ret_depth; /* * We still want to trace interrupts coming in if * max_depth is set to 1. Make sure the decrement is * seen before ftrace_graph_return. */ barrier(); + + return ret_stack; } /* @@ -285,30 +488,47 @@ struct fgraph_ret_regs; static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs, unsigned long frame_pointer) { + struct ftrace_ret_stack *ret_stack; struct ftrace_graph_ret trace; unsigned long ret; + int offset; + int index; + int idx; + int i; + + ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer); + + if (unlikely(!ret_stack)) { + ftrace_graph_stop(); + WARN_ON(1); + /* Might as well panic. What else to do? */ + return (unsigned long)panic; + } - ftrace_pop_return_trace(&trace, &ret, frame_pointer); + trace.rettime = trace_clock_local(); #ifdef CONFIG_FUNCTION_GRAPH_RETVAL trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif - trace.rettime = trace_clock_local(); - fgraph_array[0]->retfunc(&trace); + + offset = current->curr_ret_stack - 1; + index = get_ret_stack_index(current, offset); + + /* index has to be at least one! Optimize for it */ + i = 0; + do { + idx = get_fgraph_array(current, offset - i); + fgraph_array[idx]->retfunc(&trace); + i++; + } while (i < index); + /* * The ftrace_graph_return() may still access the current * ret_stack structure, we need to make sure the update of * curr_ret_stack is after that. */ barrier(); - RET_STACK_DEC(current->curr_ret_stack); - - if (unlikely(!ret)) { - ftrace_graph_stop(); - WARN_ON(1); - /* Might as well panic. What else to do? 
*/ - ret = (unsigned long)panic; - } - + current->curr_ret_stack -= index + FGRAPH_RET_INDEX; + current->curr_ret_depth--; return ret; } @@ -343,15 +563,17 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer) struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx) { + struct ftrace_ret_stack *ret_stack = NULL; int index = task->curr_ret_stack; - BUILD_BUG_ON(FGRAPH_RET_SIZE % sizeof(long)); - - index -= FGRAPH_RET_INDEX * (idx + 1); if (index < 0) return NULL; - return RET_STACK(task, index); + do { + ret_stack = get_ret_stack(task, index, &index); + } while (ret_stack && --idx >= 0); + + return ret_stack; } /** @@ -374,16 +596,15 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp) { struct ftrace_ret_stack *ret_stack; - int index = task->curr_ret_stack; - int i; + int i = task->curr_ret_stack; if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret; - RET_STACK_DEC(index); - - for (i = index; i >= 0; RET_STACK_DEC(i)) { - ret_stack = RET_STACK(task, i); + while (i > 0) { + ret_stack = get_ret_stack(current, i, &i); + if (!ret_stack) + break; if (ret_stack->retp == retp) return ret_stack->ret; } @@ -394,21 +615,26 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp) { - int task_idx; + struct ftrace_ret_stack *ret_stack; + int task_idx = task->curr_ret_stack; + int i; if (ret != (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret; - task_idx = task->curr_ret_stack; - RET_STACK_DEC(task_idx); - - if (!task->ret_stack || task_idx < *idx) + if (!idx) return ret; - task_idx -= *idx; - RET_STACK_INC(*idx); + i = *idx; + do { + ret_stack = get_ret_stack(task, task_idx, &task_idx); + i--; + } while (i >= 0 && ret_stack); + + if (ret_stack) + return ret_stack->ret; - return RET_STACK(task, task_idx); + return ret; } #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */ @@ -514,10 +740,10 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, */ timestamp -= next->ftrace_timestamp; - for (index = next->curr_ret_stack - FGRAPH_RET_INDEX; index >= 0; ) { - ret_stack = RET_STACK(next, index); - ret_stack->calltime += timestamp; - index -= FGRAPH_RET_INDEX; + for (index = next->curr_ret_stack; index > 0; ) { + ret_stack = get_ret_stack(next, index, &index); + if (ret_stack) + ret_stack->calltime += timestamp; } } @@ -568,6 +794,8 @@ graph_init_task(struct task_struct *t, unsigned long *ret_stack) { atomic_set(&t->trace_overrun, 0); t->ftrace_timestamp = 0; + t->curr_ret_stack = 0; + t->curr_ret_depth = -1; /* make curr_ret_stack visible before we add the ret_stack */ smp_wmb(); t->ret_stack = ret_stack; From patchwork Mon Nov 27 13:54:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170149 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3128910vqx; Mon, 27 Nov 2023 05:56:10 -0800 (PST) X-Google-Smtp-Source: AGHT+IF0/DGRvtOxLqSZeBP59+K3tMn25zo+MyOsv3TA1uzbyA8HPUlvwHBSg43f7P/tU1wV/zA4 X-Received: by 2002:a05:6a20:6a0e:b0:18a:d7a8:5e5e with SMTP id p14-20020a056a206a0e00b0018ad7a85e5emr9436394pzk.41.1701093369875; Mon, 27 Nov 2023 05:56:09 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093369; cv=none; 
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v3 08/33] function_graph: Remove logic around ftrace_graph_entry and return
Date: Mon, 27 Nov 2023 22:54:33 +0900
Message-Id: <170109327280.343914.16318101741773152293.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
From: Steven Rostedt (VMware) The function pointers ftrace_graph_entry and ftrace_graph_return are no longer called via the function_graph tracer. Instead, an array structure is now used that will allow for multiple users of the function_graph infrastructure. The variables are still used by the architecture code for non dynamic ftrace configs, where a test is made against them to see if they point to the default stub function or not. This is how the static function tracing knows to call into the function graph tracer infrastructure or not. Two new stub functions are made. entry_run() and return_run(). The ftrace_graph_entry and ftrace_graph_return are set to them respectively when the function graph tracer is enabled, and this will trigger the architecture specific function graph code to be executed. This also requires checking the global_ops hash for all calls into the function_graph tracer. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Fix typo and make lines shorter than 76 chars in the description. - Remove unneeded return from return_run() function.
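The check that the static (non-dynamic ftrace) architecture code performs is simply a pointer comparison against the stub. The following is a minimal, self-contained user-space sketch of that convention, not kernel code: the struct, entry_run() and arch_maybe_call_graph() are simplified stand-ins for the real ftrace definitions.

#include <stdio.h>

struct ftrace_graph_ent {
	unsigned long func;
	int depth;
};

typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *trace);

/* Default stub: graph tracing is disabled */
static int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
{
	(void)trace;
	return 0;
}

/* Stand-in for entry_run(): anything other than the stub */
static int entry_run(struct ftrace_graph_ent *trace)
{
	(void)trace;
	return 1;
}

static trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;

/* What a static-ftrace arch conceptually does at a traced call site */
static void arch_maybe_call_graph(unsigned long ip)
{
	struct ftrace_graph_ent trace = { .func = ip, .depth = 0 };

	/* Only call into the graph machinery if the hook is not the stub */
	if (ftrace_graph_entry != ftrace_graph_entry_stub)
		printf("graph entry for %#lx returned %d\n", ip, ftrace_graph_entry(&trace));
	else
		printf("graph tracing is off for %#lx\n", ip);
}

int main(void)
{
	arch_maybe_call_graph(0x1000);	/* hook is still the stub: skipped */
	ftrace_graph_entry = entry_run;	/* roughly what registering a user does */
	arch_maybe_call_graph(0x1000);	/* hook differs from the stub: traced */
	return 0;
}

Running the sketch prints one skipped call and one traced call, mirroring how repointing ftrace_graph_entry away from the stub switches the architecture path on.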
--- kernel/trace/fgraph.c | 71 +++++++++++----------------------------- kernel/trace/ftrace.c | 2 - kernel/trace/ftrace_internal.h | 2 - 3 files changed, 19 insertions(+), 56 deletions(-) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 8aba93be11b2..97a9ffb8bb4c 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -128,6 +128,17 @@ static inline int get_fgraph_array(struct task_struct *t, int offset) FGRAPH_ARRAY_MASK; } +/* ftrace_graph_entry set to this to tell some archs to run function graph */ +static int entry_run(struct ftrace_graph_ent *trace) +{ + return 0; +} + +/* ftrace_graph_return set to this to tell some archs to run function graph */ +static void return_run(struct ftrace_graph_ret *trace) +{ +} + /* * @offset: The index into @t->ret_stack to find the ret_stack entry * @index: Where to place the index into @t->ret_stack of that entry @@ -323,6 +334,10 @@ int function_graph_enter(unsigned long ret, unsigned long func, ftrace_find_rec_direct(ret - MCOUNT_INSN_SIZE)) return -EBUSY; #endif + + if (!ftrace_ops_test(&global_ops, func, NULL)) + return -EBUSY; + trace.func = func; trace.depth = ++current->curr_ret_depth; @@ -664,7 +679,6 @@ extern void ftrace_stub_graph(struct ftrace_graph_ret *); /* The callbacks that hook a function */ trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph; trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub; -static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub; /* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */ static int alloc_retstack_tasklist(unsigned long **ret_stack_list) @@ -747,46 +761,6 @@ ftrace_graph_probe_sched_switch(void *ignore, bool preempt, } } -static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace) -{ - if (!ftrace_ops_test(&global_ops, trace->func, NULL)) - return 0; - return __ftrace_graph_entry(trace); -} - -/* - * The function graph tracer should only trace the functions defined - * by set_ftrace_filter and set_ftrace_notrace. If another function - * tracer ops is registered, the graph tracer requires testing the - * function against the global ops, and not just trace any function - * that any ftrace_ops registered. - */ -void update_function_graph_func(void) -{ - struct ftrace_ops *op; - bool do_test = false; - - /* - * The graph and global ops share the same set of functions - * to test. If any other ops is on the list, then - * the graph tracing needs to test if its the function - * it should call. - */ - do_for_each_ftrace_op(op, ftrace_ops_list) { - if (op != &global_ops && op != &graph_ops && - op != &ftrace_list_end) { - do_test = true; - /* in double loop, break out with goto */ - goto out; - } - } while_for_each_ftrace_op(op); - out: - if (do_test) - ftrace_graph_entry = ftrace_graph_entry_test; - else - ftrace_graph_entry = __ftrace_graph_entry; -} - static DEFINE_PER_CPU(unsigned long *, idle_ret_stack); static void @@ -927,18 +901,12 @@ int register_ftrace_graph(struct fgraph_ops *gops) ftrace_graph_active--; goto out; } - - ftrace_graph_return = gops->retfunc; - /* - * Update the indirect function to the entryfunc, and the - * function that gets called to the entry_test first. Then - * call the update fgraph entry function to determine if - * the entryfunc should be called directly or not. 
+ * Some archs just test to see if these are not + * the default function */ - __ftrace_graph_entry = gops->entryfunc; - ftrace_graph_entry = ftrace_graph_entry_test; - update_function_graph_func(); + ftrace_graph_return = return_run; + ftrace_graph_entry = entry_run; ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET); } @@ -974,7 +942,6 @@ void unregister_ftrace_graph(struct fgraph_ops *gops) if (!ftrace_graph_active) { ftrace_graph_return = ftrace_stub_graph; ftrace_graph_entry = ftrace_graph_entry_stub; - __ftrace_graph_entry = ftrace_graph_entry_stub; ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); unregister_pm_notifier(&ftrace_suspend_notifier); unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 8de8bec5f366..fd64021ec52f 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -235,8 +235,6 @@ static void update_ftrace_function(void) func = ftrace_ops_list_func; } - update_function_graph_func(); - /* If there's no change, then do nothing more here */ if (ftrace_trace_function == func) return; diff --git a/kernel/trace/ftrace_internal.h b/kernel/trace/ftrace_internal.h index 5012c04f92c0..19eddcb91584 100644 --- a/kernel/trace/ftrace_internal.h +++ b/kernel/trace/ftrace_internal.h @@ -42,10 +42,8 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs) #ifdef CONFIG_FUNCTION_GRAPH_TRACER extern int ftrace_graph_active; -void update_function_graph_func(void); #else /* !CONFIG_FUNCTION_GRAPH_TRACER */ # define ftrace_graph_active 0 -static inline void update_function_graph_func(void) { } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #else /* !CONFIG_FUNCTION_TRACER */ From patchwork Mon Nov 27 13:54:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170150 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3129131vqx; Mon, 27 Nov 2023 05:56:30 -0800 (PST) X-Google-Smtp-Source: AGHT+IE9Cwukkux+e3NZeQ+JHQgZ70UBgMBS3X7Q7y2QotdUHbe0TYEDmXIjR1Pf14hQqCstYJrf X-Received: by 2002:a05:6808:3083:b0:3b8:5e09:8c40 with SMTP id bl3-20020a056808308300b003b85e098c40mr9011380oib.2.1701093390402; Mon, 27 Nov 2023 05:56:30 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093390; cv=none; d=google.com; s=arc-20160816; b=Cz3Ejgn7ON83h1Tn+uGkUHyaHb9yTemgZyG01UiQ5IhdeFza8R/72FuwkRVVQ3UHfi KqXqv6JxDPRwBmOcTtWcRqO4+R/F+3uDJNCqgfz8z2L6QQoOs6D3PjbYioUkpI/2a2yk k1FdCRZXJMx8arrgQs5FXMvURy8vPw8kHZe5t//3dgFQJqcFem3tpgLLibBPzQ1puCo3 Puf+zU51/kq9nv+97CLxwEvcV6N7+imJSjRmuLVhoezuJOZXboEviso6vOzKXjOsEHqo FJeiWn4CuTRc6++uxx+8+v3P21U1pcqKIUuxt6K38oe26qLAERl4vSrkWCChQyh5lBt6 7vBw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=FKKArjL7Va1yzI65kbkBqvFpAsWrpV7G0ac5YwhivwQ=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=ABhlf0XKwju/8K4Hf3WUm1CcKfRnqR2ZRCCimKcS55RlheZaoZWMF5rGiQUcG/ONC+ qvNe0eBdQigBpn4/Fwff4SrSjBBUYPU9pmNhLNYr1GwUjLax5tgM4xIzud3so+dGyb5Z Zsovd1cVUoUOT6+OYZUt2MDROaRhmGuXrAmYiIgPQ8IEQLUrXOjUV7OzDo/2Z75f2ys7 rdv5E1JGQE8XAQiqW1HPSE/Rv5mgjPi6UqsV9PIfbSvI4ziNTK3BFxbBImTyHNjt4qcw kvALzdYN2xdN1YuxFj52D8mpu2EfS2xU/LoB1cS6Xup1xMBfKhuUgpS3pUNxNWlnuBsr CCFw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org 
From: "Masami Hiramatsu (Google)"
Subject: [PATCH v3 09/33] ftrace/function_graph: Pass fgraph_ops to function graph callbacks
Date: Mon, 27 Nov 2023 22:54:45 +0900
Message-Id: <170109328499.343914.17985524634561630393.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
Precedence: bulk List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (pete.vger.email [0.0.0.0]); Mon, 27 Nov 2023 05:56:27 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783725703132869015 X-GMAIL-MSGID: 1783725703132869015 From: Steven Rostedt (VMware) Pass the fgraph_ops structure to the function graph callbacks. This will allow callbacks to add a descriptor to a fgraph_ops private field that wil be added in the future and use it for the callbacks. This will be useful when more than one callback can be registered to the function graph tracer. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - cleanup to set argument name on function prototype. --- include/linux/ftrace.h | 10 +++++++--- kernel/trace/fgraph.c | 16 +++++++++------- kernel/trace/ftrace.c | 6 ++++-- kernel/trace/trace.h | 4 ++-- kernel/trace/trace_functions_graph.c | 11 +++++++---- kernel/trace/trace_irqsoff.c | 6 ++++-- kernel/trace/trace_sched_wakeup.c | 6 ++++-- kernel/trace/trace_selftest.c | 5 +++-- 8 files changed, 40 insertions(+), 24 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 8b48fc621ea0..bc216cdc742d 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1055,11 +1055,15 @@ struct ftrace_graph_ret { unsigned long long rettime; } __packed; +struct fgraph_ops; + /* Type of the callback handlers for tracing function graph*/ -typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */ -typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *); /* entry */ +typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, + struct fgraph_ops *); /* return */ +typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, + struct fgraph_ops *); /* entry */ -extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace); +extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); #ifdef CONFIG_FUNCTION_GRAPH_TRACER diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 97a9ffb8bb4c..62c35d6d95f9 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -129,13 +129,13 @@ static inline int get_fgraph_array(struct task_struct *t, int offset) } /* ftrace_graph_entry set to this to tell some archs to run function graph */ -static int entry_run(struct ftrace_graph_ent *trace) +static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops) { return 0; } /* ftrace_graph_return set to this to tell some archs to run function graph */ -static void return_run(struct ftrace_graph_ret *trace) +static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops) { } @@ -199,12 +199,14 @@ int __weak ftrace_disable_ftrace_graph_caller(void) } #endif -int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) +int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { return 0; } -static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace) +static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { } @@ -358,7 +360,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, atomic_inc(¤t->trace_overrun); break; } - if (fgraph_array[i]->entryfunc(&trace)) { + if (fgraph_array[i]->entryfunc(&trace, fgraph_array[i])) { offset = current->curr_ret_stack; /* Check the top level stored word */ type = get_fgraph_type(current, offset - 1); @@ -532,7 +534,7 @@ static unsigned long 
__ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs i = 0; do { idx = get_fgraph_array(current, offset - i); - fgraph_array[idx]->retfunc(&trace); + fgraph_array[idx]->retfunc(&trace, fgraph_array[idx]); i++; } while (i < index); @@ -674,7 +676,7 @@ void ftrace_graph_sleep_time_control(bool enable) * Simply points to ftrace_stub, but with the proper protocol. * Defined by the linker script in linux/vmlinux.lds.h */ -extern void ftrace_stub_graph(struct ftrace_graph_ret *); +void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); /* The callbacks that hook a function */ trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph; diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index fd64021ec52f..7ff5c454622a 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -815,7 +815,8 @@ void ftrace_graph_graph_time_control(bool enable) fgraph_graph_time = enable; } -static int profile_graph_entry(struct ftrace_graph_ent *trace) +static int profile_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct ftrace_ret_stack *ret_stack; @@ -832,7 +833,8 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace) return 1; } -static void profile_graph_return(struct ftrace_graph_ret *trace) +static void profile_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct ftrace_ret_stack *ret_stack; struct ftrace_profile_stat *stat; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 77debe53f07c..ada49bf1fbc8 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -670,8 +670,8 @@ void trace_latency_header(struct seq_file *m); void trace_default_header(struct seq_file *m); void print_trace_header(struct seq_file *m, struct trace_iterator *iter); -void trace_graph_return(struct ftrace_graph_ret *trace); -int trace_graph_entry(struct ftrace_graph_ent *trace); +void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); +int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); void set_graph_array(struct trace_array *tr); void tracing_start_cmdline_record(void); diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index c35fbaab2a47..b7b142b65299 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -129,7 +129,8 @@ static inline int ftrace_graph_ignore_irqs(void) return in_hardirq(); } -int trace_graph_entry(struct ftrace_graph_ent *trace) +int trace_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = graph_array; struct trace_array_cpu *data; @@ -238,7 +239,8 @@ void __trace_graph_return(struct trace_array *tr, trace_buffer_unlock_commit_nostack(buffer, event); } -void trace_graph_return(struct ftrace_graph_ret *trace) +void trace_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = graph_array; struct trace_array_cpu *data; @@ -275,7 +277,8 @@ void set_graph_array(struct trace_array *tr) smp_mb(); } -static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) +static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { ftrace_graph_addr_finish(trace); @@ -288,7 +291,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) (trace->rettime - trace->calltime < tracing_thresh)) return; else - trace_graph_return(trace); + trace_graph_return(trace, gops); } static struct fgraph_ops funcgraph_thresh_ops = { 
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index ba37f768e2f2..5478f4c4f708 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -175,7 +175,8 @@ static int irqsoff_display_graph(struct trace_array *tr, int set) return start_irqsoff_tracer(irqsoff_trace, set); } -static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) +static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; @@ -205,7 +206,8 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) return ret; } -static void irqsoff_graph_return(struct ftrace_graph_ret *trace) +static void irqsoff_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 0469a04a355f..49bcc812652c 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -112,7 +112,8 @@ static int wakeup_display_graph(struct trace_array *tr, int set) return start_func_tracer(tr, set); } -static int wakeup_graph_entry(struct ftrace_graph_ent *trace) +static int wakeup_graph_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; @@ -141,7 +142,8 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace) return ret; } -static void wakeup_graph_return(struct ftrace_graph_ret *trace) +static void wakeup_graph_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 529590499b1f..914331d8242c 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -762,7 +762,8 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) static unsigned int graph_hang_thresh; /* Wrap the real function entry probe to avoid possible hanging */ -static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace) +static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) { /* This is harmlessly racy, we want to approximately detect a hang */ if (unlikely(++graph_hang_thresh > GRAPH_MAX_FUNC_TEST)) { @@ -776,7 +777,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace) return 0; } - return trace_graph_entry(trace); + return trace_graph_entry(trace, gops); } static struct fgraph_ops fgraph_ops __initdata = { From patchwork Mon Nov 27 13:54:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170151 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3129281vqx; Mon, 27 Nov 2023 05:56:44 -0800 (PST) X-Google-Smtp-Source: AGHT+IHxJzAW/SnbZD1KipPprbtNdT2sORYjceDVPI8E1G0T0IrbaLFOKnvSLC9rZxRVoLv5zmtZ X-Received: by 2002:a17:90b:1d90:b0:285:b6be:d89c with SMTP id pf16-20020a17090b1d9000b00285b6bed89cmr6536585pjb.7.1701093404169; Mon, 27 Nov 2023 05:56:44 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093404; cv=none; d=google.com; s=arc-20160816; b=LF/Emqhx9kHZcaGmeNYlTnY/LkS3/eTjFmMy7V5gqHNJy70BVZyxKYjJse2LB6bPNt cemLjuBYTJiYw+OplyshsyBqdpiQxckIO8BQnNw+x1rKQ8zf/ph1MFMn9U+ptf7BfkL1 
From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc:
linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 10/33] ftrace: Allow function_graph tracer to be enabled in instances Date: Mon, 27 Nov 2023 22:54:57 +0900 Message-Id: <170109329728.343914.5492808588885335966.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on pete.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (pete.vger.email [0.0.0.0]); Mon, 27 Nov 2023 05:56:37 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783725717013252984 X-GMAIL-MSGID: 1783725717013252984 From: Steven Rostedt (VMware) Now that function graph tracing can handle more than one user, allow it to be enabled in the ftrace instances. Note, the filtering of the functions is still joined by the top level set_ftrace_filter and friends, as well as the graph and nograph files. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Fix to remove set_graph_array() completely. --- include/linux/ftrace.h | 1 + kernel/trace/ftrace.c | 1 + kernel/trace/trace.h | 13 ++++++- kernel/trace/trace_functions.c | 8 ++++ kernel/trace/trace_functions_graph.c | 65 +++++++++++++++++++++------------- kernel/trace/trace_selftest.c | 4 +- 6 files changed, 64 insertions(+), 28 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index bc216cdc742d..0955baccbb87 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1070,6 +1070,7 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + void *private; }; /* diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 7ff5c454622a..83fbfb7b48f8 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -7319,6 +7319,7 @@ __init void ftrace_init_global_array_ops(struct trace_array *tr) tr->ops = &global_ops; tr->ops->private = tr; ftrace_init_trace_array(tr); + init_array_fgraph_ops(tr); } void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func) diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index ada49bf1fbc8..febb9c6d01c7 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -395,6 +395,9 @@ struct trace_array { struct ftrace_ops *ops; struct trace_pid_list __rcu *function_pids; struct trace_pid_list __rcu *function_no_pids; +#ifdef CONFIG_FUNCTION_GRAPH_TRACER + struct fgraph_ops *gops; +#endif #ifdef CONFIG_DYNAMIC_FTRACE /* All of these are protected by the ftrace_lock */ struct list_head func_probes; @@ -672,7 +675,6 @@ void print_trace_header(struct seq_file *m, struct trace_iterator *iter); void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); -void 
set_graph_array(struct trace_array *tr); void tracing_start_cmdline_record(void); void tracing_stop_cmdline_record(void); @@ -883,6 +885,9 @@ extern int __trace_graph_entry(struct trace_array *tr, extern void __trace_graph_return(struct trace_array *tr, struct ftrace_graph_ret *trace, unsigned int trace_ctx); +extern void init_array_fgraph_ops(struct trace_array *tr); +extern int allocate_fgraph_ops(struct trace_array *tr); +extern void free_fgraph_ops(struct trace_array *tr); #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; @@ -995,6 +1000,12 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags) { return TRACE_TYPE_UNHANDLED; } +static inline void init_array_fgraph_ops(struct trace_array *tr) { } +static inline int allocate_fgraph_ops(struct trace_array *tr) +{ + return 0; +} +static inline void free_fgraph_ops(struct trace_array *tr) { } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ extern struct list_head ftrace_pids; diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c index 9f1bfbe105e8..8e8da0d0ee52 100644 --- a/kernel/trace/trace_functions.c +++ b/kernel/trace/trace_functions.c @@ -80,6 +80,7 @@ void ftrace_free_ftrace_ops(struct trace_array *tr) int ftrace_create_function_files(struct trace_array *tr, struct dentry *parent) { + int ret; /* * The top level array uses the "global_ops", and the files are * created on boot up. @@ -90,6 +91,12 @@ int ftrace_create_function_files(struct trace_array *tr, if (!tr->ops) return -EINVAL; + ret = allocate_fgraph_ops(tr); + if (ret) { + kfree(tr->ops); + return ret; + } + ftrace_create_filter_files(tr->ops, parent); return 0; @@ -99,6 +106,7 @@ void ftrace_destroy_function_files(struct trace_array *tr) { ftrace_destroy_filter_files(tr->ops); ftrace_free_ftrace_ops(tr); + free_fgraph_ops(tr); } static ftrace_func_t select_trace_function(u32 flags_val) diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index b7b142b65299..9ccc904a7703 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -83,8 +83,6 @@ static struct tracer_flags tracer_flags = { .opts = trace_opts }; -static struct trace_array *graph_array; - /* * DURATION column is being also used to display IRQ signs, * following values are used by print_graph_irq and others @@ -132,7 +130,7 @@ static inline int ftrace_graph_ignore_irqs(void) int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops) { - struct trace_array *tr = graph_array; + struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; unsigned int trace_ctx; @@ -242,7 +240,7 @@ void __trace_graph_return(struct trace_array *tr, void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { - struct trace_array *tr = graph_array; + struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; unsigned int trace_ctx; @@ -268,15 +266,6 @@ void trace_graph_return(struct ftrace_graph_ret *trace, local_irq_restore(flags); } -void set_graph_array(struct trace_array *tr) -{ - graph_array = tr; - - /* Make graph_array visible before we start tracing */ - - smp_mb(); -} - static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { @@ -294,25 +283,53 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, trace_graph_return(trace, gops); } -static struct fgraph_ops funcgraph_thresh_ops = { - .entryfunc = &trace_graph_entry, - .retfunc = 
&trace_graph_thresh_return, -}; - static struct fgraph_ops funcgraph_ops = { .entryfunc = &trace_graph_entry, .retfunc = &trace_graph_return, }; +int allocate_fgraph_ops(struct trace_array *tr) +{ + struct fgraph_ops *gops; + + gops = kzalloc(sizeof(*gops), GFP_KERNEL); + if (!gops) + return -ENOMEM; + + gops->entryfunc = &trace_graph_entry; + gops->retfunc = &trace_graph_return; + + tr->gops = gops; + gops->private = tr; + return 0; +} + +void free_fgraph_ops(struct trace_array *tr) +{ + kfree(tr->gops); +} + +__init void init_array_fgraph_ops(struct trace_array *tr) +{ + tr->gops = &funcgraph_ops; + funcgraph_ops.private = tr; +} + static int graph_trace_init(struct trace_array *tr) { int ret; - set_graph_array(tr); + tr->gops->entryfunc = trace_graph_entry; + if (tracing_thresh) - ret = register_ftrace_graph(&funcgraph_thresh_ops); + tr->gops->retfunc = trace_graph_thresh_return; else - ret = register_ftrace_graph(&funcgraph_ops); + tr->gops->retfunc = trace_graph_return; + + /* Make gops functions are visible before we start tracing */ + smp_mb(); + + ret = register_ftrace_graph(tr->gops); if (ret) return ret; tracing_start_cmdline_record(); @@ -323,10 +340,7 @@ static int graph_trace_init(struct trace_array *tr) static void graph_trace_reset(struct trace_array *tr) { tracing_stop_cmdline_record(); - if (tracing_thresh) - unregister_ftrace_graph(&funcgraph_thresh_ops); - else - unregister_ftrace_graph(&funcgraph_ops); + unregister_ftrace_graph(tr->gops); } static int graph_trace_update_thresh(struct trace_array *tr) @@ -1365,6 +1379,7 @@ static struct tracer graph_trace __tracer_data = { .print_header = print_graph_headers, .flags = &tracer_flags, .set_flag = func_graph_set_flag, + .allow_instances = true, #ifdef CONFIG_FTRACE_SELFTEST .selftest = trace_selftest_startup_function_graph, #endif diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 914331d8242c..f0758afa2f7d 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -813,7 +813,7 @@ trace_selftest_startup_function_graph(struct tracer *trace, * to detect and recover from possible hangs */ tracing_reset_online_cpus(&tr->array_buffer); - set_graph_array(tr); + fgraph_ops.private = tr; ret = register_ftrace_graph(&fgraph_ops); if (ret) { warn_failed_init_tracer(trace, ret); @@ -856,7 +856,7 @@ trace_selftest_startup_function_graph(struct tracer *trace, cond_resched(); tracing_reset_online_cpus(&tr->array_buffer); - set_graph_array(tr); + fgraph_ops.private = tr; /* * Some archs *cough*PowerPC*cough* add characters to the From patchwork Mon Nov 27 13:55:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170152 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3129456vqx; Mon, 27 Nov 2023 05:57:01 -0800 (PST) X-Google-Smtp-Source: AGHT+IH/bpvogMFPRpRwzkHollgGHcOUJ5ko7RtP7mfStVfII7uHZsJEJ1Rm9ZXiABpacv2xBQM3 X-Received: by 2002:a17:902:9883:b0:1ce:5dd8:2f31 with SMTP id s3-20020a170902988300b001ce5dd82f31mr8922588plp.39.1701093421396; Mon, 27 Nov 2023 05:57:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093421; cv=none; d=google.com; s=arc-20160816; b=dOxap5Cv3VUocu39riIe20MyfLXvV5cJIqo4zWu8ozCIl2oQ6Iw8Kr9HEB7cOrF/yC kfRbsDbhw7Do8qKGuSE2h9x2zJpwaoGx+TU644sfyVvk1CRElFhXT1jr3ag+cDWZxQ1D bEGHTlqKiFo8F7YAarLNmHh2CKQ7OEpj/0KifwQTtM2aZKsvezsnKBVLry5SIIRKnaIX 
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov , Steven Rostedt , Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov ,
Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 11/33] ftrace: Allow ftrace startup flags exist without dynamic ftrace Date: Mon, 27 Nov 2023 22:55:09 +0900 Message-Id: <170109330957.343914.4603643031632118062.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on pete.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (pete.vger.email [0.0.0.0]); Mon, 27 Nov 2023 05:56:56 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783725735315487853 X-GMAIL-MSGID: 1783725735315487853 From: Steven Rostedt (VMware) Some of the flags for ftrace_startup() may be exposed even when CONFIG_DYNAMIC_FTRACE is not configured in. This is fine as the difference between dynamic ftrace and static ftrace is done within the internals of ftrace itself. No need to have use cases fail to compile because dynamic ftrace is disabled. This change is needed to move some of the logic of what is passed to ftrace_startup() out of the parameters of ftrace_startup(). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/ftrace.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 0955baccbb87..7b08169aa51d 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -538,6 +538,15 @@ static inline void stack_tracer_disable(void) { } static inline void stack_tracer_enable(void) { } #endif +enum { + FTRACE_UPDATE_CALLS = (1 << 0), + FTRACE_DISABLE_CALLS = (1 << 1), + FTRACE_UPDATE_TRACE_FUNC = (1 << 2), + FTRACE_START_FUNC_RET = (1 << 3), + FTRACE_STOP_FUNC_RET = (1 << 4), + FTRACE_MAY_SLEEP = (1 << 5), +}; + #ifdef CONFIG_DYNAMIC_FTRACE void ftrace_arch_code_modify_prepare(void); @@ -632,15 +641,6 @@ void ftrace_set_global_notrace(unsigned char *buf, int len, int reset); void ftrace_free_filter(struct ftrace_ops *ops); void ftrace_ops_set_global_filter(struct ftrace_ops *ops); -enum { - FTRACE_UPDATE_CALLS = (1 << 0), - FTRACE_DISABLE_CALLS = (1 << 1), - FTRACE_UPDATE_TRACE_FUNC = (1 << 2), - FTRACE_START_FUNC_RET = (1 << 3), - FTRACE_STOP_FUNC_RET = (1 << 4), - FTRACE_MAY_SLEEP = (1 << 5), -}; - /* * The FTRACE_UPDATE_* enum is used to pass information back * from the ftrace_update_record() and ftrace_test_record() From patchwork Mon Nov 27 13:55:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170153 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3129781vqx; Mon, 27 Nov 2023 05:57:28 -0800 (PST) X-Google-Smtp-Source: AGHT+IF86mpU9HQ6tkM0Ofsmtvra0p+QkX0ezO0VS13vNswCRbya3bC7cZXRmQ2ZiAC2fKEQ2O3X X-Received: by 2002:a05:6a21:328a:b0:187:a75d:29dd with SMTP id yt10-20020a056a21328a00b00187a75d29ddmr16448899pzb.40.1701093447972; Mon, 27 Nov 
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov , Steven Rostedt , Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren
Subject: [PATCH v3 12/33] function_graph: Have the instances use their own ftrace_ops for filtering
Date: Mon, 27 Nov 2023 22:55:22 +0900
Message-Id: <170109332175.343914.6080879486450909526.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Steven Rostedt (VMware)

Allow instances to have their own ftrace_ops as part of the fgraph_ops, so that the function_graph tracer filters on the instance's set_ftrace_filter file rather than on the top-level instance.

This also changes how function_graph handles multiple instances on the shadow stack. Previously, ARRAY type entries recorded which instance was enabled; now a single entry holds a bitmap of the fgraph_array indexes. The old function_graph_enter() expected to be called back from prepare_ftrace_return(), which is invoked only once when enabled. This patch introduces a separate ftrace_ops for each fgraph instance, and those are called from ftrace_graph_func() one by one, so we can no longer loop over fgraph_array() and instead need to reuse the ret_stack pushed by the previous instance. Finding that ret_stack is easy because ret_stack->func can be checked, but that is not enough for the self-recursive tail-call case, so fgraph checks the bitmap entry to see whether its index is already set (which means the entry belongs to a previous tail call).

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
 - Pass the current fgraph_ops to the new entry handler (function_graph_enter_ops) when fgraph uses ftrace.
 - Add fgraph_ops::idx in this patch.
 - Replace the array type with the bitmap type so that it can record which fgraph is called.
 - Fix some helper functions to use the passed task_struct instead of current.
 - Reduce the ret-index size to 1024 words.
 - Make the ret-index point directly to the ret_stack.
 - Fix ftrace_graph_ret_addr() to handle the tail-call case correctly.
Changes in v2:
 - Use ftrace_graph_func and FTRACE_OPS_GRAPH_STUB instead of ftrace_stub and FTRACE_OPS_FL_STUB for the new ftrace-based fgraph.
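Before the diff below, a quick illustration of the new shadow-stack word layout may help. This is only a stand-alone sketch, not kernel code: the FGRAPH_* widths mirror the macros added to kernel/trace/fgraph.c in this patch, the value 3 for FGRAPH_RET_INDEX is an assumed example (in the kernel it is the size of struct ftrace_ret_stack in longs), and plain shift/mask expressions stand in for the kernel's GENMASK()/BIT() helpers.

/*
 * Stand-alone sketch of the ret_stack word introduced by this patch:
 *   bits  0- 9: offset (in longs) back to the ftrace_ret_stack entry
 *   bits 10-11: type (1 == bitmap of fgraph_array indexes)
 *   bits 12-27: bitmap of fgraph_array indexes whose retfunc must run
 */
#include <stdio.h>

#define FGRAPH_RET_INDEX	3	/* assumed example value, see note above */
#define FGRAPH_RET_INDEX_SIZE	10
#define FGRAPH_RET_INDEX_MASK	((1UL << FGRAPH_RET_INDEX_SIZE) - 1)
#define FGRAPH_TYPE_SIZE	2
#define FGRAPH_TYPE_MASK	((1UL << FGRAPH_TYPE_SIZE) - 1)
#define FGRAPH_TYPE_SHIFT	FGRAPH_RET_INDEX_SIZE
#define FGRAPH_TYPE_BITMAP	1
#define FGRAPH_INDEX_SIZE	16
#define FGRAPH_INDEX_MASK	((1UL << FGRAPH_INDEX_SIZE) - 1)
#define FGRAPH_INDEX_SHIFT	(FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE)

int main(void)
{
	/* fgraph_array slots 0 and 3 asked for their retfunc to be called */
	unsigned long bitmap = (1UL << 0) | (1UL << 3);

	/* Mirrors what set_fgraph_index_bitmap() stores on the ret_stack */
	unsigned long word = (bitmap << FGRAPH_INDEX_SHIFT) |
			     (FGRAPH_TYPE_BITMAP << FGRAPH_TYPE_SHIFT) |
			     FGRAPH_RET_INDEX;

	printf("word   = %#lx\n", word);
	printf("offset = %lu longs back to the ftrace_ret_stack\n",
	       word & FGRAPH_RET_INDEX_MASK);
	printf("type   = %lu (1 == bitmap)\n",
	       (word >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK);
	printf("bitmap = %#lx (bits 0 and 3 set)\n",
	       (word >> FGRAPH_INDEX_SHIFT) & FGRAPH_INDEX_MASK);
	return 0;
}

The return path does the inverse: __ftrace_return_to_handler() reads the bitmap back out of this word and invokes the retfunc of every fgraph_ops whose bit is set.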
--- arch/arm64/kernel/ftrace.c | 19 ++ arch/x86/kernel/ftrace.c | 19 ++ include/linux/ftrace.h | 7 + kernel/trace/fgraph.c | 364 ++++++++++++++++++++-------------- kernel/trace/ftrace.c | 6 - kernel/trace/trace.h | 16 + kernel/trace/trace_functions.c | 2 kernel/trace/trace_functions_graph.c | 8 + 8 files changed, 277 insertions(+), 164 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index a650f5e11fc5..205937e04ece 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -481,7 +481,24 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - prepare_ftrace_return(ip, &fregs->lr, fregs->fp); + unsigned long *parent = &fregs->lr; + struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); + int bit; + + if (unlikely(ftrace_graph_is_dead())) + return; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + bit = ftrace_test_recursion_trylock(ip, *parent); + if (bit < 0) + return; + + if (!function_graph_enter_ops(*parent, ip, fregs->fp, parent, gops)) + *parent = (unsigned long)&return_to_handler; + + ftrace_test_recursion_unlock(bit); } #else /* diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 12df54ff0e81..845e29b4254f 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -657,9 +657,24 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { struct pt_regs *regs = &fregs->regs; - unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs); + unsigned long *parent = (unsigned long *)kernel_stack_pointer(regs); + struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); + int bit; + + if (unlikely(ftrace_graph_is_dead())) + return; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; - prepare_ftrace_return(ip, (unsigned long *)stack, 0); + bit = ftrace_test_recursion_trylock(ip, *parent); + if (bit < 0) + return; + + if (!function_graph_enter_ops(*parent, ip, 0, parent, gops)) + *parent = (unsigned long)&return_to_handler; + + ftrace_test_recursion_unlock(bit); } #endif diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 7b08169aa51d..c431a33fe789 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1070,7 +1070,9 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + struct ftrace_ops ops; /* for the hash lists */ void *private; + int idx; }; /* @@ -1104,6 +1106,11 @@ extern int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp); +extern int +function_graph_enter_ops(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct fgraph_ops *gops); + struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx); diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 62c35d6d95f9..32c03d6ffd17 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -7,6 +7,7 @@ * * Highly modified by Steven Rostedt (VMware). 
*/ +#include #include #include #include @@ -17,22 +18,15 @@ #include "ftrace_internal.h" #include "trace.h" -#ifdef CONFIG_DYNAMIC_FTRACE -#define ASSIGN_OPS_HASH(opsname, val) \ - .func_hash = val, \ - .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock), -#else -#define ASSIGN_OPS_HASH(opsname, val) -#endif - #define FGRAPH_RET_SIZE sizeof(struct ftrace_ret_stack) #define FGRAPH_RET_INDEX (FGRAPH_RET_SIZE / sizeof(long)) /* * On entry to a function (via function_graph_enter()), a new ftrace_ret_stack - * is allocated on the task's ret_stack, then each fgraph_ops on the - * fgraph_array[]'s entryfunc is called and if that returns non-zero, the - * index into the fgraph_array[] for that fgraph_ops is added to the ret_stack. + * is allocated on the task's ret_stack with indexes entry, then each + * fgraph_ops on the fgraph_array[]'s entryfunc is called and if that returns + * non-zero, the index into the fgraph_array[] for that fgraph_ops is recorded + * on the indexes entry as a bit flag. * As the associated ftrace_ret_stack saved for those fgraph_ops needs to * be found, the index to it is also added to the ret_stack along with the * index of the fgraph_array[] to each fgraph_ops that needs their retfunc @@ -42,61 +36,59 @@ * to the last ftrace_ret_stack saved. All references to the * ftrace_ret_stack has the format of: * - * bits: 0 - 13 Index in words from the previous ftrace_ret_stack - * bits: 14 - 15 Type of storage + * bits: 0 - 9 offset in words from the previous ftrace_ret_stack + * (bitmap type should have FGRAPH_RET_INDEX always) + * bits: 10 - 11 Type of storage * 0 - reserved - * 1 - fgraph_array index - * For fgraph_array_index: - * bits: 16 - 23 The fgraph_ops fgraph_array index + * 1 - bitmap of fgraph_array index + * + * For bitmap of fgraph_array index + * bits: 12 - 27 The bitmap of fgraph_ops fgraph_array index * * That is, at the end of function_graph_enter, if the first and forth * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called * on the return of the function being traced, this is what will be on the * task's shadow ret_stack: (the stack grows upward) * - * | | <- task->curr_ret_stack - * +----------------------------------+ - * | (3 << FGRAPH_ARRAY_SHIFT)|(2) | ( 3 for index of fourth fgraph_ops) - * +----------------------------------+ - * | (0 << FGRAPH_ARRAY_SHIFT)|(1) | ( 0 for index of first fgraph_ops) - * +----------------------------------+ - * | struct ftrace_ret_stack | - * | (stores the saved ret pointer) | - * +----------------------------------+ - * | (X) | (N) | ( N words away from previous ret_stack) - * | | + * | | <- task->curr_ret_stack + * +--------------------------------------------+ + * | bitmap_type(bitmap:(BIT(3)|BIT(0)), | + * | offset:FGRAPH_RET_INDEX) | <- the offset is from here + * +--------------------------------------------+ + * | struct ftrace_ret_stack | + * | (stores the saved ret pointer) | <- the offset points here + * +--------------------------------------------+ + * | (X) | (N) | ( N words away from + * | | previous ret_stack) * * If a backtrace is required, and the real return pointer needs to be * fetched, then it looks at the task's curr_ret_stack index, if it - * is greater than zero, it would subtact one, and then mask the value - * on the ret_stack by FGRAPH_RET_INDEX_MASK and subtract FGRAPH_RET_INDEX - * from that, to get the index of the ftrace_ret_stack structure stored - * on the shadow stack. 
+ * is greater than zero (reserved, or right before poped), it would mask + * the value by FGRAPH_RET_INDEX_MASK to get the offset index of the + * ftrace_ret_stack structure stored on the shadow stack. */ -#define FGRAPH_RET_INDEX_SIZE 14 -#define FGRAPH_RET_INDEX_MASK ((1 << FGRAPH_RET_INDEX_SIZE) - 1) - +#define FGRAPH_RET_INDEX_SIZE 10 +#define FGRAPH_RET_INDEX_MASK GENMASK(FGRAPH_RET_INDEX_SIZE - 1, 0) #define FGRAPH_TYPE_SIZE 2 -#define FGRAPH_TYPE_MASK ((1 << FGRAPH_TYPE_SIZE) - 1) +#define FGRAPH_TYPE_MASK GENMASK(FGRAPH_TYPE_SIZE - 1, 0) #define FGRAPH_TYPE_SHIFT FGRAPH_RET_INDEX_SIZE enum { FGRAPH_TYPE_RESERVED = 0, - FGRAPH_TYPE_ARRAY = 1, + FGRAPH_TYPE_BITMAP = 1, }; -#define FGRAPH_ARRAY_SIZE 16 -#define FGRAPH_ARRAY_MASK ((1 << FGRAPH_ARRAY_SIZE) - 1) -#define FGRAPH_ARRAY_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) +#define FGRAPH_INDEX_SIZE 16 +#define FGRAPH_INDEX_MASK GENMASK(FGRAPH_INDEX_SIZE - 1, 0) +#define FGRAPH_INDEX_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) /* Currently the max stack index can't be more than register callers */ -#define FGRAPH_MAX_INDEX FGRAPH_ARRAY_SIZE +#define FGRAPH_MAX_INDEX (FGRAPH_INDEX_SIZE + FGRAPH_RET_INDEX) + +#define FGRAPH_ARRAY_SIZE FGRAPH_INDEX_SIZE -#define FGRAPH_FRAME_SIZE (FGRAPH_RET_SIZE + FGRAPH_ARRAY_SIZE * (sizeof(long))) -#define FGRAPH_FRAME_INDEX (ALIGN(FGRAPH_FRAME_SIZE, \ - sizeof(long)) / sizeof(long)) #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ @@ -113,19 +105,36 @@ static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; static inline int get_ret_stack_index(struct task_struct *t, int offset) { - return current->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; + return t->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; } static inline int get_fgraph_type(struct task_struct *t, int offset) { - return (current->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & - FGRAPH_TYPE_MASK; + return (t->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; +} + +static inline unsigned long +get_fgraph_index_bitmap(struct task_struct *t, int offset) +{ + return (t->ret_stack[offset] >> FGRAPH_INDEX_SHIFT) & FGRAPH_INDEX_MASK; +} + +static inline void +set_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) +{ + t->ret_stack[offset] = (bitmap << FGRAPH_INDEX_SHIFT) | + (FGRAPH_TYPE_BITMAP << FGRAPH_TYPE_SHIFT) | FGRAPH_RET_INDEX; +} + +static inline bool is_fgraph_index_set(struct task_struct *t, int offset, int idx) +{ + return !!(get_fgraph_index_bitmap(t, offset) & BIT(idx)); } -static inline int get_fgraph_array(struct task_struct *t, int offset) +static inline void +add_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) { - return (current->ret_stack[offset] >> FGRAPH_ARRAY_SHIFT) & - FGRAPH_ARRAY_MASK; + t->ret_stack[offset] |= (bitmap << FGRAPH_INDEX_SHIFT); } /* ftrace_graph_entry set to this to tell some archs to run function graph */ @@ -163,12 +172,12 @@ get_ret_stack(struct task_struct *t, int offset, int *index) if (offset <= 0) return NULL; - idx = get_ret_stack_index(t, offset - 1); + idx = get_ret_stack_index(t, --offset); if (idx <= 0 || idx > FGRAPH_MAX_INDEX) return NULL; - offset -= idx + FGRAPH_RET_INDEX; + offset -= idx; if (offset < 0) return NULL; @@ -231,10 +240,12 @@ void ftrace_graph_stop(void) /* Add a function return address to the trace stack on thread info.*/ static int ftrace_push_return_trace(unsigned long ret, unsigned long func, - unsigned long frame_pointer, 
unsigned long *retp) + unsigned long frame_pointer, unsigned long *retp, + int fgraph_idx) { struct ftrace_ret_stack *ret_stack; unsigned long long calltime; + unsigned long val; int index; if (unlikely(ftrace_graph_is_dead())) @@ -243,6 +254,27 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, if (!current->ret_stack) return -EBUSY; + if (ret == (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) { + /* + * In this case, the previous fgraph callback already pushed the + * ret_stack, or @func is called by tail-call. Usual tail-call can + * be detected if ret_stack::func is not @func, but for the self- + * recursive tail-call case needs to check whether the @fgraph_idx + * is already recorded or not. + */ + ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); + if ((ret_stack && ret_stack->func == func) && + !is_fgraph_index_set(current, index + FGRAPH_RET_INDEX, fgraph_idx)) { + return index + FGRAPH_RET_INDEX; + } + /* + * But, !ret_stack is simply something wrong because it can not + * return anywhere. + */ + WARN_ON_ONCE(!ret_stack); + } + val = (FGRAPH_TYPE_RESERVED << FGRAPH_TYPE_SHIFT) | FGRAPH_RET_INDEX; + BUILD_BUG_ON(SHADOW_STACK_SIZE % sizeof(long)); /* @@ -252,17 +284,19 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, smp_rmb(); /* The return trace stack is full */ - if (current->curr_ret_stack >= SHADOW_STACK_MAX_INDEX) { + if (current->curr_ret_stack + FGRAPH_RET_INDEX >= SHADOW_STACK_MAX_INDEX) { atomic_inc(¤t->trace_overrun); return -EBUSY; } calltime = trace_clock_local(); - index = current->curr_ret_stack; - /* ret offset = 1 ; type = reserved */ - current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + index = READ_ONCE(current->curr_ret_stack); ret_stack = RET_STACK(current, index); + index += FGRAPH_RET_INDEX; + + /* ret offset = FGRAPH_RET_INDEX ; type = reserved */ + current->ret_stack[index] = val; ret_stack->ret = ret; /* * The unwinders expect curr_ret_stack to point to either zero @@ -278,7 +312,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, * at least a correct index! */ barrier(); - current->curr_ret_stack += FGRAPH_RET_INDEX + 1; + current->curr_ret_stack = index + 1; /* * This next barrier is to ensure that an interrupt coming in * will not corrupt what we are about to write. @@ -286,7 +320,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, barrier(); /* Still keep it reserved even if an interrupt came in */ - current->ret_stack[index + FGRAPH_RET_INDEX] = 1; + current->ret_stack[index] = val; ret_stack->ret = ret; ret_stack->func = func; @@ -297,7 +331,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR ret_stack->retp = retp; #endif - return 0; + return index; } /* @@ -314,15 +348,13 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, # define MCOUNT_INSN_SIZE 0 #endif +/* If the caller does not use ftrace, call this function. 
*/ int function_graph_enter(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp) { struct ftrace_graph_ent trace; - int offset; - int start; - int type; - int val; - int cnt = 0; + unsigned long bitmap = 0; + int index; int i; #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS @@ -337,69 +369,29 @@ int function_graph_enter(unsigned long ret, unsigned long func, return -EBUSY; #endif - if (!ftrace_ops_test(&global_ops, func, NULL)) - return -EBUSY; - trace.func = func; trace.depth = ++current->curr_ret_depth; - if (ftrace_push_return_trace(ret, func, frame_pointer, retp)) + index = ftrace_push_return_trace(ret, func, frame_pointer, retp, 0); + if (index < 0) goto out; - /* Use start for the distance to ret_stack (skipping over reserve) */ - start = offset = current->curr_ret_stack - 2; - for (i = 0; i < fgraph_array_cnt; i++) { struct fgraph_ops *gops = fgraph_array[i]; if (gops == &fgraph_stub) continue; - if ((offset == start) && - (current->curr_ret_stack >= SHADOW_STACK_INDEX - 1)) { - atomic_inc(¤t->trace_overrun); - break; - } - if (fgraph_array[i]->entryfunc(&trace, fgraph_array[i])) { - offset = current->curr_ret_stack; - /* Check the top level stored word */ - type = get_fgraph_type(current, offset - 1); - - val = (i << FGRAPH_ARRAY_SHIFT) | - (FGRAPH_TYPE_ARRAY << FGRAPH_TYPE_SHIFT) | - ((offset - start) - 1); - - /* We can reuse the top word if it is reserved */ - if (type == FGRAPH_TYPE_RESERVED) { - current->ret_stack[offset - 1] = val; - cnt++; - continue; - } - val++; - - current->ret_stack[offset] = val; - /* - * Write the value before we increment, so that - * if an interrupt comes in after we increment - * it will still see the value and skip over - * this. - */ - barrier(); - current->curr_ret_stack++; - /* - * Have to write again, in case an interrupt - * came in before the increment and after we - * wrote the value. 
- */ - barrier(); - current->ret_stack[offset] = val; - cnt++; - } + if (ftrace_ops_test(&gops->ops, func, NULL) && + gops->entryfunc(&trace, gops)) + bitmap |= BIT(i); } - if (!cnt) + if (!bitmap) goto out_ret; + set_fgraph_index_bitmap(current, index, bitmap); + return 0; out_ret: current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; @@ -408,15 +400,51 @@ int function_graph_enter(unsigned long ret, unsigned long func, return -EBUSY; } +/* This is called from ftrace_graph_func() via ftrace */ +int function_graph_enter_ops(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct fgraph_ops *gops) +{ + struct ftrace_graph_ent trace; + int index; + int type; + + + /* Use start for the distance to ret_stack (skipping over reserve) */ + index = ftrace_push_return_trace(ret, func, frame_pointer, retp, gops->idx); + if (index < 0) + return index; + type = get_fgraph_type(current, index); + + /* This is the first ret_stack for this fentry */ + if (type == FGRAPH_TYPE_RESERVED) + ++current->curr_ret_depth; + + trace.func = func; + trace.depth = current->curr_ret_depth; + if (gops->entryfunc(&trace, gops)) { + if (type == FGRAPH_TYPE_RESERVED) + set_fgraph_index_bitmap(current, index, BIT(gops->idx)); + else + add_fgraph_index_bitmap(current, index, BIT(gops->idx)); + return 0; + } + + if (type == FGRAPH_TYPE_RESERVED) { + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->curr_ret_depth--; + } + return -EBUSY; +} + /* Retrieve a function return address to the trace stack on thread info.*/ static struct ftrace_ret_stack * ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, - unsigned long frame_pointer) + unsigned long frame_pointer, int *index) { struct ftrace_ret_stack *ret_stack; - int index; - ret_stack = get_ret_stack(current, current->curr_ret_stack, &index); + ret_stack = get_ret_stack(current, current->curr_ret_stack, index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -455,6 +483,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, } #endif + *index += FGRAPH_RET_INDEX; *ret = ret_stack->ret; trace->func = ret_stack->func; trace->calltime = ret_stack->calltime; @@ -507,13 +536,12 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs { struct ftrace_ret_stack *ret_stack; struct ftrace_graph_ret trace; + unsigned long bitmap; unsigned long ret; - int offset; int index; - int idx; int i; - ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer); + ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -527,16 +555,17 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif - offset = current->curr_ret_stack - 1; - index = get_ret_stack_index(current, offset); + bitmap = get_fgraph_index_bitmap(current, index); + for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { + struct fgraph_ops *gops = fgraph_array[i]; - /* index has to be at least one! 
Optimize for it */ - i = 0; - do { - idx = get_fgraph_array(current, offset - i); - fgraph_array[idx]->retfunc(&trace, fgraph_array[idx]); - i++; - } while (i < index); + if (!(bitmap & BIT(i))) + continue; + if (gops == &fgraph_stub) + continue; + + gops->retfunc(&trace, gops); + } /* * The ftrace_graph_return() may still access the current @@ -544,7 +573,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs * curr_ret_stack is after that. */ barrier(); - current->curr_ret_stack -= index + FGRAPH_RET_INDEX; + current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; current->curr_ret_depth--; return ret; } @@ -622,7 +651,17 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, ret_stack = get_ret_stack(current, i, &i); if (!ret_stack) break; - if (ret_stack->retp == retp) + /* + * For the tail-call, there would be 2 or more ftrace_ret_stacks on + * the ret_stack, which records "return_to_handler" as the return + * address excpt for the last one. + * But on the real stack, there should be 1 entry because tail-call + * reuses the return address on the stack and jump to the next function. + * Thus we will continue to find real return address. + */ + if (ret_stack->retp == retp && + ret_stack->ret != + (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) return ret_stack->ret; } @@ -645,6 +684,9 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, i = *idx; do { ret_stack = get_ret_stack(task, task_idx, &task_idx); + if (ret_stack && ret_stack->ret == + (unsigned long)dereference_kernel_function_descriptor(return_to_handler)) + continue; i--; } while (i >= 0 && ret_stack); @@ -655,17 +697,25 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, } #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */ -static struct ftrace_ops graph_ops = { - .func = ftrace_graph_func, - .flags = FTRACE_OPS_FL_INITIALIZED | - FTRACE_OPS_FL_PID | - FTRACE_OPS_GRAPH_STUB, +void fgraph_init_ops(struct ftrace_ops *dst_ops, + struct ftrace_ops *src_ops) +{ + dst_ops->func = ftrace_graph_func; + dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB; + #ifdef FTRACE_GRAPH_TRAMP_ADDR - .trampoline = FTRACE_GRAPH_TRAMP_ADDR, + dst_ops->trampoline = FTRACE_GRAPH_TRAMP_ADDR; /* trampoline_size is only needed for dynamically allocated tramps */ #endif - ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash) -}; + +#ifdef CONFIG_DYNAMIC_FTRACE + if (src_ops) { + dst_ops->func_hash = &src_ops->local_hash; + mutex_init(&dst_ops->local_hash.regex_lock); + dst_ops->flags |= FTRACE_OPS_FL_INITIALIZED; + } +#endif +} void ftrace_graph_sleep_time_control(bool enable) { @@ -869,11 +919,20 @@ static int start_graph_tracing(void) int register_ftrace_graph(struct fgraph_ops *gops) { + int command = 0; int ret = 0; int i; mutex_lock(&ftrace_lock); + if (!gops->ops.func) { + gops->ops.flags |= FTRACE_OPS_GRAPH_STUB; + gops->ops.func = ftrace_graph_func; +#ifdef FTRACE_GRAPH_TRAMP_ADDR + gops->ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR; +#endif + } + if (!fgraph_array[0]) { /* The array must always have real data on it */ for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) @@ -893,6 +952,7 @@ int register_ftrace_graph(struct fgraph_ops *gops) fgraph_array[i] = gops; if (i + 1 > fgraph_array_cnt) fgraph_array_cnt = i + 1; + gops->idx = i; ftrace_graph_active++; @@ -909,9 +969,10 @@ int register_ftrace_graph(struct fgraph_ops *gops) */ ftrace_graph_return = return_run; ftrace_graph_entry = entry_run; - - ret = ftrace_startup(&graph_ops, 
FTRACE_START_FUNC_RET); + command = FTRACE_START_FUNC_RET; } + + ret = ftrace_startup(&gops->ops, command); out: mutex_unlock(&ftrace_lock); return ret; @@ -919,6 +980,7 @@ int register_ftrace_graph(struct fgraph_ops *gops) void unregister_ftrace_graph(struct fgraph_ops *gops) { + int command = 0; int i; mutex_lock(&ftrace_lock); @@ -926,25 +988,29 @@ void unregister_ftrace_graph(struct fgraph_ops *gops) if (unlikely(!ftrace_graph_active)) goto out; - for (i = 0; i < fgraph_array_cnt; i++) - if (gops == fgraph_array[i]) - break; - if (i >= fgraph_array_cnt) + if (unlikely(gops->idx < 0 || gops->idx >= fgraph_array_cnt)) goto out; - fgraph_array[i] = &fgraph_stub; - if (i + 1 == fgraph_array_cnt) { - for (; i >= 0; i--) - if (fgraph_array[i] != &fgraph_stub) - break; + WARN_ON_ONCE(fgraph_array[gops->idx] != gops); + + fgraph_array[gops->idx] = &fgraph_stub; + if (gops->idx + 1 == fgraph_array_cnt) { + i = gops->idx; + while (i >= 0 && fgraph_array[i] == &fgraph_stub) + i--; fgraph_array_cnt = i + 1; } ftrace_graph_active--; + + if (!ftrace_graph_active) + command = FTRACE_STOP_FUNC_RET; + + ftrace_shutdown(&gops->ops, command); + if (!ftrace_graph_active) { ftrace_graph_return = ftrace_stub_graph; ftrace_graph_entry = ftrace_graph_entry_stub; - ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET); unregister_pm_notifier(&ftrace_suspend_notifier); unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); } diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 83fbfb7b48f8..c4cc2a9d0047 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -3050,6 +3050,8 @@ int ftrace_startup(struct ftrace_ops *ops, int command) if (unlikely(ftrace_disabled)) return -ENODEV; + ftrace_ops_init(ops); + ret = __register_ftrace_function(ops); if (ret) return ret; @@ -7319,7 +7321,7 @@ __init void ftrace_init_global_array_ops(struct trace_array *tr) tr->ops = &global_ops; tr->ops->private = tr; ftrace_init_trace_array(tr); - init_array_fgraph_ops(tr); + init_array_fgraph_ops(tr, tr->ops); } void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func) @@ -8051,7 +8053,7 @@ static int register_ftrace_function_nolock(struct ftrace_ops *ops) */ int register_ftrace_function(struct ftrace_ops *ops) { - int ret; + int ret = -1; lock_direct_mutex(); ret = prepare_direct_functions_for_ipmodify(ops); diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index febb9c6d01c7..f77322e3b177 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -885,8 +885,8 @@ extern int __trace_graph_entry(struct trace_array *tr, extern void __trace_graph_return(struct trace_array *tr, struct ftrace_graph_ret *trace, unsigned int trace_ctx); -extern void init_array_fgraph_ops(struct trace_array *tr); -extern int allocate_fgraph_ops(struct trace_array *tr); +extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); +extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); extern void free_fgraph_ops(struct trace_array *tr); #ifdef CONFIG_DYNAMIC_FTRACE @@ -969,6 +969,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) preempt_enable_notrace(); return ret; } + #else static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) { @@ -994,18 +995,19 @@ static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) (fgraph_max_depth && trace->depth >= fgraph_max_depth); } +void fgraph_init_ops(struct ftrace_ops *dst_ops, + struct ftrace_ops *src_ops); + #else /* CONFIG_FUNCTION_GRAPH_TRACER */ static inline 
enum print_line_t print_graph_function_flags(struct trace_iterator *iter, u32 flags) { return TRACE_TYPE_UNHANDLED; } -static inline void init_array_fgraph_ops(struct trace_array *tr) { } -static inline int allocate_fgraph_ops(struct trace_array *tr) -{ - return 0; -} static inline void free_fgraph_ops(struct trace_array *tr) { } +/* ftrace_ops may not be defined */ +#define init_array_fgraph_ops(tr, ops) do { } while (0) +#define allocate_fgraph_ops(tr, ops) ({ 0; }) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ extern struct list_head ftrace_pids; diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c index 8e8da0d0ee52..13bf2415245d 100644 --- a/kernel/trace/trace_functions.c +++ b/kernel/trace/trace_functions.c @@ -91,7 +91,7 @@ int ftrace_create_function_files(struct trace_array *tr, if (!tr->ops) return -EINVAL; - ret = allocate_fgraph_ops(tr); + ret = allocate_fgraph_ops(tr, tr->ops); if (ret) { kfree(tr->ops); return ret; diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 9ccc904a7703..7f30652f0e97 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -288,7 +288,7 @@ static struct fgraph_ops funcgraph_ops = { .retfunc = &trace_graph_return, }; -int allocate_fgraph_ops(struct trace_array *tr) +int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops) { struct fgraph_ops *gops; @@ -301,6 +301,9 @@ int allocate_fgraph_ops(struct trace_array *tr) tr->gops = gops; gops->private = tr; + + fgraph_init_ops(&gops->ops, ops); + return 0; } @@ -309,10 +312,11 @@ void free_fgraph_ops(struct trace_array *tr) kfree(tr->gops); } -__init void init_array_fgraph_ops(struct trace_array *tr) +__init void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops) { tr->gops = &funcgraph_ops; funcgraph_ops.private = tr; + fgraph_init_ops(&tr->gops->ops, ops); } static int graph_trace_init(struct trace_array *tr) From patchwork Mon Nov 27 13:55:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170154 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3131131vqx; Mon, 27 Nov 2023 05:59:35 -0800 (PST) X-Google-Smtp-Source: AGHT+IHeAYG4G+tsWTjsz3LhgSjZfG1W9941xSVJT7p77BQ5BUE00Ki2Dk1tZ/kIMCCUfgV+1/qn X-Received: by 2002:a05:6a20:1483:b0:18b:cdc2:be57 with SMTP id o3-20020a056a20148300b0018bcdc2be57mr8850621pzi.27.1701093574834; Mon, 27 Nov 2023 05:59:34 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093574; cv=none; d=google.com; s=arc-20160816; b=YB2klaaVCr/dXC2c0/ETTEdeTNFV25LM2ezZP2ROuDv8MLYRSARA8/f3aGxzC2vlbH DI4osb7XwKYOz0/xpp4cylvLBuxILSR43NwdDVZmWaM3ztisQYoFsv5PU4lrDGjzacUJ QGnUmOcwG9kGg0d9Do464gM/DlgLPc8Xf9vKlI7hW2KbpiYjwPfgi2iPduiqgCDYdfIf li7ad1N8u9YNgTtv69OcpGKvXrIkXaO5L1ZdqPJHl3LZyCXidvOx9q00GDL81RYVKDh/ plEdPY+tQssh2qi+Z2KMhn6qWP9sxI/kH+rUDhxm2OXLNMeiq4BwV4gCnnojDY/Hmnjf P/cw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=L8KZOgGZZ7Ag+48aCrymzdJ0C4IdR7oWD7e9itsHC2w=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=ES6qdeYFPkFH4lOBIhb2R4M57xSTp9nwREB4WRBMPYuw295FGAWW7t6xsltU+UptGW UCXMfRuevgAd2UZM6Hp+P/3LWYdE8/vbyjpN4zytoJ9uyUVGDVtykvIn0ub9OqVDHE4/ 
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov , Steven Rostedt , Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren
Subject: [PATCH v3 13/33] function_graph: Add "task variables" per task for fgraph_ops
Date: Mon, 27 Nov 2023 22:55:34 +0900
Message-Id: <170109333434.343914.12269219730661872767.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>
DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on pete.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (pete.vger.email [0.0.0.0]); Mon, 27 Nov 2023 05:59:32 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783725896248729154 X-GMAIL-MSGID: 1783725896248729154 From: Steven Rostedt (VMware) Add a "task variables" array on the tasks shadow ret_stack that is the size of longs for each possible registered fgraph_ops. That's a total of 16, taking up 8 * 16 = 128 bytes (out of a page size 4k). This will allow for fgraph_ops to do specific features on a per task basis having a way to maintain state for each task. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Move fgraph_ops::idx to previous patch in the series. Changes in v2: - Make description lines shorter than 76 chars. --- include/linux/ftrace.h | 1 + kernel/trace/fgraph.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 70 insertions(+), 1 deletion(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index c431a33fe789..09ca4bba63f2 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1116,6 +1116,7 @@ ftrace_graph_get_ret_stack(struct task_struct *task, int idx); unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, unsigned long ret, unsigned long *retp); +unsigned long *fgraph_get_task_var(struct fgraph_ops *gops); /* * Sometimes we don't want to trace a function with the function diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 32c03d6ffd17..d1981fe8edf0 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -92,10 +92,18 @@ enum { #define SHADOW_STACK_SIZE (PAGE_SIZE) #define SHADOW_STACK_INDEX (SHADOW_STACK_SIZE / sizeof(long)) /* Leave on a buffer at the end */ -#define SHADOW_STACK_MAX_INDEX (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1)) +#define SHADOW_STACK_MAX_INDEX \ + (SHADOW_STACK_INDEX - (FGRAPH_RET_INDEX + 1 + FGRAPH_ARRAY_SIZE)) #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) +/* + * Each fgraph_ops has a reservered unsigned long at the end (top) of the + * ret_stack to store task specific state. + */ +#define SHADOW_STACK_TASK_VARS(ret_stack) \ + ((unsigned long *)(&(ret_stack)[SHADOW_STACK_INDEX - FGRAPH_ARRAY_SIZE])) + DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph); int ftrace_graph_active; @@ -148,6 +156,44 @@ static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops) { } +static void ret_stack_set_task_var(struct task_struct *t, int idx, long val) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack); + + gvals[idx] = val; +} + +static unsigned long * +ret_stack_get_task_var(struct task_struct *t, int idx) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(t->ret_stack); + + return &gvals[idx]; +} + +static void ret_stack_init_task_vars(unsigned long *ret_stack) +{ + unsigned long *gvals = SHADOW_STACK_TASK_VARS(ret_stack); + + memset(gvals, 0, sizeof(*gvals) * FGRAPH_ARRAY_SIZE); +} + +/** + * fgraph_get_task_var - retrieve a task specific state variable + * @gops: The ftrace_ops that owns the task specific variable + * + * Every registered fgraph_ops has a task state variable + * reserved on the task's ret_stack. 
This function returns the + * address to that variable. + * + * Returns the address to the fgraph_ops @gops tasks specific + * unsigned long variable. + */ +unsigned long *fgraph_get_task_var(struct fgraph_ops *gops) +{ + return ret_stack_get_task_var(current, gops->idx); +} + /* * @offset: The index into @t->ret_stack to find the ret_stack entry * @index: Where to place the index into @t->ret_stack of that entry @@ -759,6 +805,7 @@ static int alloc_retstack_tasklist(unsigned long **ret_stack_list) if (t->ret_stack == NULL) { atomic_set(&t->trace_overrun, 0); + ret_stack_init_task_vars(ret_stack_list[start]); t->curr_ret_stack = 0; t->curr_ret_depth = -1; /* Make sure the tasks see the 0 first: */ @@ -819,6 +866,7 @@ static void graph_init_task(struct task_struct *t, unsigned long *ret_stack) { atomic_set(&t->trace_overrun, 0); + ret_stack_init_task_vars(ret_stack); t->ftrace_timestamp = 0; t->curr_ret_stack = 0; t->curr_ret_depth = -1; @@ -917,6 +965,24 @@ static int start_graph_tracing(void) return ret; } +static void init_task_vars(int idx) +{ + struct task_struct *g, *t; + int cpu; + + for_each_online_cpu(cpu) { + if (idle_task(cpu)->ret_stack) + ret_stack_set_task_var(idle_task(cpu), idx, 0); + } + + read_lock(&tasklist_lock); + for_each_process_thread(g, t) { + if (t->ret_stack) + ret_stack_set_task_var(t, idx, 0); + } + read_unlock(&tasklist_lock); +} + int register_ftrace_graph(struct fgraph_ops *gops) { int command = 0; @@ -970,6 +1036,8 @@ int register_ftrace_graph(struct fgraph_ops *gops) ftrace_graph_return = return_run; ftrace_graph_entry = entry_run; command = FTRACE_START_FUNC_RET; + } else { + init_task_vars(gops->idx); } ret = ftrace_startup(&gops->ops, command); From patchwork Mon Nov 27 13:55:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170155 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3133028vqx; Mon, 27 Nov 2023 06:01:23 -0800 (PST) X-Google-Smtp-Source: AGHT+IHf3EZGREB+9mUupuSes31JD56Gb+bmtVukNEh+dCBf5GOFTv4xjUOk1GChJJFH+GvLy6ac X-Received: by 2002:a05:6e02:1c07:b0:35c:d4c8:b272 with SMTP id l7-20020a056e021c0700b0035cd4c8b272mr2912987ilh.30.1701093679426; Mon, 27 Nov 2023 06:01:19 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093679; cv=none; d=google.com; s=arc-20160816; b=gbdLtA5337jcMmTyZJ613FXQNlC4Dh4M89Xhs2ewNDcN/vELTmA9+mXklq5dxFJ/AA BRJa3s1yf/MmeIUKGVvhuf1fOiP7uRjTFz6UCwOJkiDnmsHyX9qzlV7R7cFErzl65+mF CYbpi959otL6DC3frCEdBWgwNfBrxP5e0H4nDjYjBv6lF2I+I0z+yxfTy1jpD6N5PZ/I sWpBLiD5Ma8lm6d8J3GXxXabqeRaFKN1xQ0qIBVK2IX0Iq8qVXArcS0y9tQBZoCNLphu +jhyV9jClesk/ebXWclN+xCJB1052OU8/P5E8HpOYgPHVEHehU7YOKzjHmJ5OQUri6ph uA7g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=Glc12OMQpuXGBrpuWjQfe4tNVNPWMlTUH5bfvqXzSpE=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=ROoaQM0Aq/nmQWVkcHYT1UBOYixxaFcjboA6uq5tO/VD+kkMXRWAEZlvlQXuD4O+xS GfhVYCAK2Z1actYzGON6O2laRFIJf/eIM4Z4iNtrsI7FJ+/iEGr48L0VzqpSeAMsuWXs HtfO0Q1+o6wiCYHmKd2yvXkmxkCXpmFteqz5uCoXdTxaHx1s7xWQK3iv/Klk1tW6uAAt 38BBEHIhAEw2LO+bQ43ncwtNh5hDqcqZnnu7aA450X1K3kc9YCMxBoKyu3aRwoiHHfrI B1daSrHcxBMgsRBmWdVLnEs2VXhE9faY2fWK3r0YwTiGVfI9t3A2EQjZVI4tO0YyqnJU hIwA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org 
From: "Masami Hiramatsu (Google)" Subject: [PATCH v3 14/33] function_graph: Move set_graph_function tests to shadow stack global var Date: Mon, 27 Nov 2023 22:55:47 +0900 Message-Id: <170109334684.343914.7953030930194348147.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:01:10 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726005912397401 X-GMAIL-MSGID: 1783726005912397401 From: Steven Rostedt (VMware) The use of the task->trace_recursion for the logic used for the set_graph_funnction was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/trace_recursion.h | 5 +---- kernel/trace/trace.h | 32 +++++++++++++++++++++----------- kernel/trace/trace_functions_graph.c | 6 +++--- kernel/trace/trace_irqsoff.c | 4 ++-- kernel/trace/trace_sched_wakeup.c | 4 ++-- 5 files changed, 29 insertions(+), 22 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index d48cd92d2364..2efd5ec46d7f 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,9 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* Set if the function is in the set_graph_function file */ - TRACE_GRAPH_BIT, - /* * In the very unlikely case that an interrupt came in * at a start of graph tracing, and we want to trace @@ -60,7 +57,7 @@ enum { * that preempted a softirq start of a function that * preempted normal context!!!! Luckily, it can't be * greater than 3, so the next two bits are a mask - * of what the depth is when we set TRACE_GRAPH_BIT + * of what the depth is when we set TRACE_GRAPH_FL */ TRACE_GRAPH_DEPTH_START_BIT, diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index f77322e3b177..60d38709ab91 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -889,11 +889,16 @@ extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops); extern void free_fgraph_ops(struct trace_array *tr); +enum { + TRACE_GRAPH_FL = 1, +}; + #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash; -static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) +static inline int +ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) { unsigned long addr = trace->func; int ret = 0; @@ -915,12 +920,11 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) } if (ftrace_lookup_ip(hash, addr)) { - /* * This needs to be cleared on the return functions * when the depth is zero. 
*/ - trace_recursion_set(TRACE_GRAPH_BIT); + *task_var |= TRACE_GRAPH_FL; trace_recursion_set_depth(trace->depth); /* @@ -940,11 +944,14 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) return ret; } -static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) +static inline void +ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace) { - if (trace_recursion_test(TRACE_GRAPH_BIT) && + unsigned long *task_var = fgraph_get_task_var(gops); + + if ((*task_var & TRACE_GRAPH_FL) && trace->depth == trace_recursion_depth()) - trace_recursion_clear(TRACE_GRAPH_BIT); + *task_var &= ~TRACE_GRAPH_FL; } static inline int ftrace_graph_notrace_addr(unsigned long addr) @@ -971,7 +978,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) } #else -static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) +static inline int ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) { return 1; } @@ -980,17 +987,20 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr) { return 0; } -static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) +static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace) { } #endif /* CONFIG_DYNAMIC_FTRACE */ extern unsigned int fgraph_max_depth; -static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) +static inline bool +ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace) { + unsigned long *task_var = fgraph_get_task_var(gops); + /* trace it when it is-nested-in or is a function enabled. */ - return !(trace_recursion_test(TRACE_GRAPH_BIT) || - ftrace_graph_addr(trace)) || + return !((*task_var & TRACE_GRAPH_FL) || + ftrace_graph_addr(task_var, trace)) || (trace->depth < 0) || (fgraph_max_depth && trace->depth >= fgraph_max_depth); } diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 7f30652f0e97..66cce73e94f8 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -160,7 +160,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, if (!ftrace_trace_task(tr)) return 0; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; if (ftrace_graph_ignore_irqs()) @@ -247,7 +247,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace, long disabled; int cpu; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); @@ -269,7 +269,7 @@ void trace_graph_return(struct ftrace_graph_ret *trace, static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 5478f4c4f708..fce064e20570 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -184,7 +184,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, unsigned int trace_ctx; int ret; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; /* * Do not trace a function if it's filtered by set_graph_notrace. 
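For illustration only, this is roughly how any fgraph_ops user can keep a private per-task flag with fgraph_get_task_var(), in the same way the hunks above replace the trace_recursion bit with TRACE_GRAPH_FL. It is a minimal sketch against this series, not code from the patch: the names my_entry, my_return, my_gops and MY_ACTIVE are hypothetical, and it assumes the multiple-fgraph callback signatures used throughout this patch set.

#include <linux/bits.h>
#include <linux/ftrace.h>

/* Hypothetical flag kept in this fgraph_ops' per-task slot */
#define MY_ACTIVE	BIT(0)

static int my_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops)
{
	unsigned long *task_var = fgraph_get_task_var(gops);

	/* Only track the outermost function entry for this task */
	if (*task_var & MY_ACTIVE)
		return 0;

	*task_var |= MY_ACTIVE;
	return 1;	/* request the matching return callback */
}

static void my_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops)
{
	unsigned long *task_var = fgraph_get_task_var(gops);

	*task_var &= ~MY_ACTIVE;
}

static struct fgraph_ops my_gops = {
	.entryfunc	= my_entry,
	.retfunc	= my_return,
};

After register_ftrace_graph(&my_gops), the flag lives in the slot indexed by my_gops.idx on every task's shadow ret_stack, so no separate per-task allocation is needed.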
@@ -214,7 +214,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace, unsigned long flags; unsigned int trace_ctx; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (!func_prolog_dec(tr, &data, &flags)) return; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 49bcc812652c..130ca7e7787e 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -120,7 +120,7 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace, unsigned int trace_ctx; int ret = 0; - if (ftrace_graph_ignore_func(trace)) + if (ftrace_graph_ignore_func(gops, trace)) return 0; /* * Do not trace a function if it's filtered by set_graph_notrace. @@ -149,7 +149,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace, struct trace_array_cpu *data; unsigned int trace_ctx; - ftrace_graph_addr_finish(trace); + ftrace_graph_addr_finish(gops, trace); if (!func_prolog_preempt_disable(tr, &data, &trace_ctx)) return; From patchwork Mon Nov 27 13:55:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170158 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3133980vqx; Mon, 27 Nov 2023 06:02:10 -0800 (PST) X-Google-Smtp-Source: AGHT+IF5iMx2sH02oXIrgkWWlXLyw4QLtH1yrqCu5WWkU+6A9Jh9A3ilV+MKai4W9yDRqUnoNV5Q X-Received: by 2002:a0d:ea0b:0:b0:5ca:e2d4:623a with SMTP id t11-20020a0dea0b000000b005cae2d4623amr11491338ywe.12.1701093730058; Mon, 27 Nov 2023 06:02:10 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093730; cv=none; d=google.com; s=arc-20160816; b=JxXDrgyK2LBdEhAQ7ud1yK++YrfGRwUDAX5dBs2Ethn83uwudpVC5QGSzpHz7lmX/d 6Gc6YMqdXlBD03CMUNZykRAznIQeFJ9xrf6TqwB/WoW49KxWAlijsfLRUL0h3ShYnCcj 3/KD0V1ZwllTQ/f/nfZf1ZWK5l0z99n2orqY04dDduV6ZZXosp/JM6IcIYDn6y5+E5I0 vsUWmkG/sdZ7fk4p9gCQYuxyN68VI2oxfkCWeu4LVI0S7DRsi7oA1SmOF6xdRGK3NIdW 9AOHlQUOk+fh8A5Qs3pu6pFKOM3iJec/E8MU811XR2UZTZGKLBInmshAwi+JiBJrLM0j YpqA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=+FnSZ6xmFSrA3ZDQwJUZ+ljtz5vSSwvDSDyWa3YxNUE=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=KEfZIg6qkvXnA+PQVQlvHEsGw/zaqwn5nGzaRuEr54hiL4Z2YeVrvUOx9CVP2hABUy FdZ5pwv9RhUM2K3q3Ni0CpP6Boe6wtPbW/CePlr71Y2CjhmhnDKpJa2z/bzAS1AtuYKs f3edG3A1Z2nIcH1BRjOV7cboT7zQk4FN+qTQQ4y3dzsZb09XWmWkMIQ/BtzBGzTiuw+B KI6+ZFYb4Ig1OIVfnFVC5df2h5jKdeAKxfOvAYwN587F4DAblPlgyFBWBm1qVLfH1l1q wnyHO1l47gBobSUz5zzX+hQ0lesZZPmLVic9na9M7XRlLtwWNeLnu6NTDKV+XlY6nx2i ioIQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=Tbr0qLbw; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:6 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from pete.vger.email (pete.vger.email. 
From: "Masami Hiramatsu (Google)" Subject: [PATCH v3 15/33] function_graph: Move graph depth stored data to shadow stack global var Date: Mon, 27 Nov 2023 22:55:59 +0900 Message-Id: <170109335913.343914.13373917760465141817.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Steven
Rostedt (VMware) The use of the task->trace_recursion for the logic used for the function graph depth was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- include/linux/trace_recursion.h | 29 ----------------------------- kernel/trace/trace.h | 34 ++++++++++++++++++++++++++++++++-- 2 files changed, 32 insertions(+), 31 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index 2efd5ec46d7f..00e792bf148d 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,25 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* - * In the very unlikely case that an interrupt came in - * at a start of graph tracing, and we want to trace - * the function in that interrupt, the depth can be greater - * than zero, because of the preempted start of a previous - * trace. In an even more unlikely case, depth could be 2 - * if a softirq interrupted the start of graph tracing, - * followed by an interrupt preempting a start of graph - * tracing in the softirq, and depth can even be 3 - * if an NMI came in at the start of an interrupt function - * that preempted a softirq start of a function that - * preempted normal context!!!! Luckily, it can't be - * greater than 3, so the next two bits are a mask - * of what the depth is when we set TRACE_GRAPH_FL - */ - - TRACE_GRAPH_DEPTH_START_BIT, - TRACE_GRAPH_DEPTH_END_BIT, - /* * To implement set_graph_notrace, if this bit is set, we ignore * function graph tracing of called functions, until the return @@ -78,16 +59,6 @@ enum { #define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0) #define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit))) -#define trace_recursion_depth() \ - (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3) -#define trace_recursion_set_depth(depth) \ - do { \ - current->trace_recursion &= \ - ~(3 << TRACE_GRAPH_DEPTH_START_BIT); \ - current->trace_recursion |= \ - ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \ - } while (0) - #define TRACE_CONTEXT_BITS 4 #define TRACE_FTRACE_START TRACE_FTRACE_BIT diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 60d38709ab91..cbe44998ef77 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -891,8 +891,38 @@ extern void free_fgraph_ops(struct trace_array *tr); enum { TRACE_GRAPH_FL = 1, + + /* + * In the very unlikely case that an interrupt came in + * at a start of graph tracing, and we want to trace + * the function in that interrupt, the depth can be greater + * than zero, because of the preempted start of a previous + * trace. In an even more unlikely case, depth could be 2 + * if a softirq interrupted the start of graph tracing, + * followed by an interrupt preempting a start of graph + * tracing in the softirq, and depth can even be 3 + * if an NMI came in at the start of an interrupt function + * that preempted a softirq start of a function that + * preempted normal context!!!! 
Luckily, it can't be + * greater than 3, so the next two bits are a mask + * of what the depth is when we set TRACE_GRAPH_FL + */ + + TRACE_GRAPH_DEPTH_START_BIT, + TRACE_GRAPH_DEPTH_END_BIT, }; +static inline unsigned long ftrace_graph_depth(unsigned long *task_var) +{ + return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3; +} + +static inline void ftrace_graph_set_depth(unsigned long *task_var, int depth) +{ + *task_var &= ~(3 << TRACE_GRAPH_DEPTH_START_BIT); + *task_var |= (depth & 3) << TRACE_GRAPH_DEPTH_START_BIT; +} + #ifdef CONFIG_DYNAMIC_FTRACE extern struct ftrace_hash __rcu *ftrace_graph_hash; extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash; @@ -925,7 +955,7 @@ ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace) * when the depth is zero. */ *task_var |= TRACE_GRAPH_FL; - trace_recursion_set_depth(trace->depth); + ftrace_graph_set_depth(task_var, trace->depth); /* * If no irqs are to be traced, but a set_graph_function @@ -950,7 +980,7 @@ ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace unsigned long *task_var = fgraph_get_task_var(gops); if ((*task_var & TRACE_GRAPH_FL) && - trace->depth == trace_recursion_depth()) + trace->depth == ftrace_graph_depth(task_var)) *task_var &= ~TRACE_GRAPH_FL; } From patchwork Mon Nov 27 13:56:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170160 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3134776vqx; Mon, 27 Nov 2023 06:02:48 -0800 (PST) X-Google-Smtp-Source: AGHT+IFihKQBExjQDd7u2XQTuhsTevH7ilE7yGR+CeDLbZAH0Xu2Uz5Hsltu36HzVCG5kkFQd8R5 X-Received: by 2002:a0d:cc13:0:b0:5cc:355e:5e93 with SMTP id o19-20020a0dcc13000000b005cc355e5e93mr12131187ywd.22.1701093768255; Mon, 27 Nov 2023 06:02:48 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093768; cv=none; d=google.com; s=arc-20160816; b=nm/uSDMOmySbDHluZrGfiUHuZY0CIDfq1JMoK2dSK4vgBzbfdDvIgkMBqxxjKC7CtE e3bVrh88b9qerThzzRPQOzW8nf+JnIvY+wbS+sBA8rJKzmIwgANyvMgvbWBtrwrhgbD6 +d+fsdBytoCg5TdyElWRqQaI1svQkDg1gZv0i8niLcSqOlwSTu6Ska76DlyJW7NteqA5 6eT9e49dhq9qmMvnrS+0QOec2AH59Be2tK5g9X1RfhEcO152qrZmvkcsUlCs9GsP4oJB h/sRDziduHcZzpCzTOiL+e2cmoX22RKkTJ6oL4+MrCAlqwUGw7tGH5JYwd6Snx/kQF2z 05Vw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=Ih7foLE9yrXjw3EVDGiI2vJzA3x2/UbIlB5AjZuDwHQ=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=Sqau8jjxI7eVfJDYoYUHkatHawFmyaTdEE/ez6xeVDAWdD9wItkT3hyURKfy39rvsM l98D9k3x7BDo9XrEBEXZt9YqmvXLFEGA36gDnoltPB3eFVvADuWQCsBnojKT0EbPrjCo xovO9WirAPYlBL0pdEb1Rj6SP86Oxd15EnuI61bJW4itlCNUrBnvMbVCthEwTkggU9QR Wr3KCFqBSieLscciHX7usZNYSuv53Jx5beBLh/4e04+1ltrX9r6fDIFbMAm0hKnKsJpy tJMoyhG3V08ec4MmG+larGarm77geP6qG3PPBulFG/znBzEexmCixZPnAVFn8x0NjdoH YSPQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=ntekJ5o1; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from snail.vger.email (snail.vger.email. 
From: "Masami Hiramatsu (Google)" Subject: [PATCH v3 16/33] function_graph: Move graph notrace bit to shadow stack global var Date: Mon, 27 Nov 2023 22:56:11 +0900 Message-Id: <170109337141.343914.17947410912543913151.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Steven Rostedt (VMware) The
use of the task->trace_recursion for the logic used for the function graph no-trace was a bit of an abuse of that variable. Now that there exist global vars that are per stack for registered graph traces, use those instead. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Make description lines shorter than 76 chars. --- include/linux/trace_recursion.h | 7 ------- kernel/trace/trace.h | 9 +++++++++ kernel/trace/trace_functions_graph.c | 10 ++++++---- 3 files changed, 15 insertions(+), 11 deletions(-) diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h index 00e792bf148d..cc11b0e9d220 100644 --- a/include/linux/trace_recursion.h +++ b/include/linux/trace_recursion.h @@ -44,13 +44,6 @@ enum { */ TRACE_IRQ_BIT, - /* - * To implement set_graph_notrace, if this bit is set, we ignore - * function graph tracing of called functions, until the return - * function is called to clear it. - */ - TRACE_GRAPH_NOTRACE_BIT, - /* Used to prevent recursion recording from recursing. */ TRACE_RECORD_RECURSION_BIT, }; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index cbe44998ef77..27b2b52c36cc 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -910,8 +910,17 @@ enum { TRACE_GRAPH_DEPTH_START_BIT, TRACE_GRAPH_DEPTH_END_BIT, + + /* + * To implement set_graph_notrace, if this bit is set, we ignore + * function graph tracing of called functions, until the return + * function is called to clear it. + */ + TRACE_GRAPH_NOTRACE_BIT, }; +#define TRACE_GRAPH_NOTRACE (1 << TRACE_GRAPH_NOTRACE_BIT) + static inline unsigned long ftrace_graph_depth(unsigned long *task_var) { return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3; } diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 66cce73e94f8..13d0387ac6a6 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -130,6 +130,7 @@ static inline int ftrace_graph_ignore_irqs(void) int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops) { + unsigned long *task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; @@ -138,7 +139,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, int ret; int cpu; - if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) + if (*task_var & TRACE_GRAPH_NOTRACE) return 0; /* @@ -149,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace, * returning from the function. */ if (ftrace_graph_notrace_addr(trace->func)) { - trace_recursion_set(TRACE_GRAPH_NOTRACE_BIT); + *task_var |= TRACE_GRAPH_NOTRACE; /* * Need to return 1 to have the return called * that will clear the NOTRACE bit.
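With patches 14-16 applied, the graph tracer's per-task variable packs three pieces of state into one word. Below is a rough sketch of the resulting layout, assuming the enum values that fall out of this version of the series (TRACE_GRAPH_FL == 1 so the flag is bit 0, TRACE_GRAPH_DEPTH_START_BIT == 2 and TRACE_GRAPH_NOTRACE_BIT == 4) and assuming it is built where kernel/trace/trace.h is visible; the helper is purely illustrative and not part of the patch.

#include <linux/printk.h>

/*
 * Illustrative layout of the word returned by fgraph_get_task_var()
 * for the graph tracer, given the enum ordering in this series:
 *
 *   bit 0     TRACE_GRAPH_FL      - function matched set_graph_function
 *   bit 1     (currently unused)
 *   bits 2-3  depth recorded when TRACE_GRAPH_FL was set (0-3)
 *   bit 4     TRACE_GRAPH_NOTRACE - inside a set_graph_notrace function
 */
static void show_graph_task_var(unsigned long val)
{
	pr_info("fl=%lu depth=%lu notrace=%d\n",
		val & TRACE_GRAPH_FL,
		(val >> TRACE_GRAPH_DEPTH_START_BIT) & 3,
		!!(val & TRACE_GRAPH_NOTRACE));
}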
@@ -240,6 +241,7 @@ void __trace_graph_return(struct trace_array *tr, void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops) { + unsigned long *task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; struct trace_array_cpu *data; unsigned long flags; @@ -249,8 +251,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace, ftrace_graph_addr_finish(gops, trace); - if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) { - trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT); + if (*task_var & TRACE_GRAPH_NOTRACE) { + *task_var &= ~TRACE_GRAPH_NOTRACE; return; } From patchwork Mon Nov 27 13:56:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170156 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3133594vqx; Mon, 27 Nov 2023 06:01:50 -0800 (PST) X-Google-Smtp-Source: AGHT+IHM/y5gpsLIMcomhae63ai3dKmYfATGBmKKtjEKtdXbewv6BMXPHiaxhBIQjaQroX3MPzXD X-Received: by 2002:a25:f820:0:b0:db4:2dc4:3907 with SMTP id u32-20020a25f820000000b00db42dc43907mr11003882ybd.0.1701093710707; Mon, 27 Nov 2023 06:01:50 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093710; cv=none; d=google.com; s=arc-20160816; b=eit3YXSz0qtofq1lNi5peZws5/sWSu98XqS6N82frgZlE4qo5Z8q7vWHV6701smm4P c5+nQovaVIwzHp5kc1u/U1yNMo92tH6jsWUf+lIc69xztgIZst9BQn69KF8QwN45NUpn 4tQUdtg/dWpJl0VmWnRBM3FnhpYkoRutv9y0SzzD+4G9LONkBHGeUzAyha7SSE1xrl8+ ovq1xS1coJyzFjNmdhMwAUEB5Fc4vCkcKQ66onrRDq/9FGcOGxne3akHbaopHJpu1e1H zRDUitbazTkG2iV0VNpoQCjCxkxYWUFr7q1z2JpJv9a+6bDKBKKJM5PC4SQfyMF/+BGe uNmA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=scIUeHglBht4jmg4BlZgY4goRZdBm2zZzH/NOq5U4Gs=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=ImG/cW1LILJS2xtJ58WB3EnxxYk9WrFFNsukjuIkdHgN9Hpdvd7fupQAU+kJfGbFZl VaU03GCoyUhRjnltVnxBEoRND4kpPJroLZn8W313pHanLrB48Xn6NYXc5xjFYBFSAMOS j9sMuIhk0+FcLhTPP18LsLEobP/fxK0bCSUl8Pokf44GM7tA9LgPTkQ3DyLw8+JMTCJO 4n2RYfjTAyk0mwvF7SmKUBux6eXM5ILLNj149a7yTWkYcAm01ud0DGkhqyxoYeIsgxn+ C7KjAIQGFTqraw+3Ds7aj8ISqKdGz2aEYR+H+amCaeFhNvAGxa7q2yhRwgaRGHAvgjqL yohQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=B1tvLS6t; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:8 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from fry.vger.email (fry.vger.email. 
From: "Masami Hiramatsu (Google)" Subject: [PATCH v3 17/33] function_graph: Implement fgraph_reserve_data() and fgraph_retrieve_data() Date: Mon, 27 Nov 2023 22:56:24 +0900 Message-Id: <170109338411.343914.7434126754301624440.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Steven
Rostedt (VMware) Added functions that can be called by a fgraph_ops entryfunc and retfunc to store state between the entry of the function being traced to the exit of the same function. The fgraph_ops entryfunc() may call fgraph_reserve_data() to store up to 32 words onto the task's shadow ret_stack and this then can be retrieved by fgraph_retrieve_data() called by the corresponding retfunc(). Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Store fgraph_array index to the data entry. - Both function requires fgraph_array index to store/retrieve data. - Reserve correct size of the data. - Return correct data area. Changes in v2: - Retrieve the reserved size by fgraph_retrieve_data(). - Expand the maximum data size to 32 words. - Update stack index with __get_index(val) if FGRAPH_TYPE_ARRAY entry. - fix typos and make description lines shorter than 76 chars. --- include/linux/ftrace.h | 3 + kernel/trace/fgraph.c | 175 ++++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 170 insertions(+), 8 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 09ca4bba63f2..1c121237016e 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1075,6 +1075,9 @@ struct fgraph_ops { int idx; }; +void *fgraph_reserve_data(int idx, int size_bytes); +void *fgraph_retrieve_data(int idx, int *size_bytes); + /* * Stack of return addresses for functions * of a thread. diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index d1981fe8edf0..317202943936 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -41,17 +41,29 @@ * bits: 10 - 11 Type of storage * 0 - reserved * 1 - bitmap of fgraph_array index + * 2 - reserved data * * For bitmap of fgraph_array index * bits: 12 - 27 The bitmap of fgraph_ops fgraph_array index * + * For reserved data: + * bits: 12 - 17 The size in words that is stored + * bits: 18 - 23 The index of fgraph_array, which shows who is stored + * * That is, at the end of function_graph_enter, if the first and forth * fgraph_ops on the fgraph_array[] (index 0 and 3) needs their retfunc called - * on the return of the function being traced, this is what will be on the - * task's shadow ret_stack: (the stack grows upward) + * on the return of the function being traced, and the forth fgraph_ops + * stored two words of data, this is what will be on the task's shadow + * ret_stack: (the stack grows upward) * * | | <- task->curr_ret_stack * +--------------------------------------------+ + * | data_type(idx:3, size:2, | + * | offset:FGRAPH_RET_INDEX+3) | ( Data with size of 2 words) + * +--------------------------------------------+ ( It is 4 words from the ret_stack) + * | STORED DATA WORD 2 | + * | STORED DATA WORD 1 | + * +------i-------------------------------------+ * | bitmap_type(bitmap:(BIT(3)|BIT(0)), | * | offset:FGRAPH_RET_INDEX) | <- the offset is from here * +--------------------------------------------+ @@ -78,14 +90,23 @@ enum { FGRAPH_TYPE_RESERVED = 0, FGRAPH_TYPE_BITMAP = 1, + FGRAPH_TYPE_DATA = 2, }; #define FGRAPH_INDEX_SIZE 16 #define FGRAPH_INDEX_MASK GENMASK(FGRAPH_INDEX_SIZE - 1, 0) #define FGRAPH_INDEX_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) -/* Currently the max stack index can't be more than register callers */ -#define FGRAPH_MAX_INDEX (FGRAPH_INDEX_SIZE + FGRAPH_RET_INDEX) +#define FGRAPH_DATA_SIZE 5 +#define FGRAPH_DATA_MASK ((1 << FGRAPH_DATA_SIZE) - 1) +#define FGRAPH_DATA_SHIFT (FGRAPH_TYPE_SHIFT + FGRAPH_TYPE_SIZE) + +#define 
FGRAPH_DATA_INDEX_SIZE 4 +#define FGRAPH_DATA_INDEX_MASK ((1 << FGRAPH_DATA_INDEX_SIZE) - 1) +#define FGRAPH_DATA_INDEX_SHIFT (FGRAPH_DATA_SHIFT + FGRAPH_DATA_SIZE) + +#define FGRAPH_MAX_INDEX \ + ((FGRAPH_INDEX_SIZE << FGRAPH_DATA_SIZE) + FGRAPH_RET_INDEX) #define FGRAPH_ARRAY_SIZE FGRAPH_INDEX_SIZE @@ -97,6 +118,8 @@ enum { #define RET_STACK(t, index) ((struct ftrace_ret_stack *)(&(t)->ret_stack[index])) +#define FGRAPH_MAX_DATA_SIZE (sizeof(long) * (1 << FGRAPH_DATA_SIZE)) + /* * Each fgraph_ops has a reservered unsigned long at the end (top) of the * ret_stack to store task specific state. @@ -111,14 +134,39 @@ static int fgraph_array_cnt; static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE]; +static inline int __get_index(unsigned long val) +{ + return val & FGRAPH_RET_INDEX_MASK; +} + +static inline int __get_type(unsigned long val) +{ + return (val >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; +} + +static inline int __get_data_index(unsigned long val) +{ + return (val >> FGRAPH_DATA_INDEX_SHIFT) & FGRAPH_DATA_INDEX_MASK; +} + +static inline int __get_data_size(unsigned long val) +{ + return (val >> FGRAPH_DATA_SHIFT) & FGRAPH_DATA_MASK; +} + +static inline unsigned long get_fgraph_entry(struct task_struct *t, int index) +{ + return t->ret_stack[index]; +} + static inline int get_ret_stack_index(struct task_struct *t, int offset) { - return t->ret_stack[offset] & FGRAPH_RET_INDEX_MASK; + return __get_index(t->ret_stack[offset]); } static inline int get_fgraph_type(struct task_struct *t, int offset) { - return (t->ret_stack[offset] >> FGRAPH_TYPE_SHIFT) & FGRAPH_TYPE_MASK; + return __get_type(t->ret_stack[offset]); } static inline unsigned long @@ -145,6 +193,22 @@ add_fgraph_index_bitmap(struct task_struct *t, int offset, unsigned long bitmap) t->ret_stack[offset] |= (bitmap << FGRAPH_INDEX_SHIFT); } +static inline void *get_fgraph_data(struct task_struct *t, int index) +{ + unsigned long val = t->ret_stack[index]; + + if (__get_type(val) != FGRAPH_TYPE_DATA) + return NULL; + index -= __get_data_size(val); + return (void *)&t->ret_stack[index]; +} + +static inline unsigned long make_fgraph_data(int idx, int size, int offset) +{ + return (idx << FGRAPH_DATA_INDEX_SHIFT) | (size << FGRAPH_DATA_SHIFT) | + (FGRAPH_TYPE_DATA << FGRAPH_TYPE_SHIFT) | offset; +} + /* ftrace_graph_entry set to this to tell some archs to run function graph */ static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops) { @@ -178,6 +242,92 @@ static void ret_stack_init_task_vars(unsigned long *ret_stack) memset(gvals, 0, sizeof(*gvals) * FGRAPH_ARRAY_SIZE); } +/** + * fgraph_reserve_data - Reserve storage on the task's ret_stack + * @idx: The index of fgraph_array + * @size_bytes: The size in bytes to reserve + * + * Reserves space of up to FGRAPH_MAX_DATA_SIZE bytes on the + * task's ret_stack shadow stack, for a given fgraph_ops during + * the entryfunc() call. If entryfunc() returns zero, the storage + * is discarded. An entryfunc() can only call this once per iteration. + * The fgraph_ops retfunc() can retrieve this stored data with + * fgraph_retrieve_data(). + * + * Returns: On success, a pointer to the data on the stack. + * Otherwise, NULL if there's not enough space left on the + * ret_stack for the data, or if fgraph_reserve_data() was called + * more than once for a single entryfunc() call. 
+ */ +void *fgraph_reserve_data(int idx, int size_bytes) +{ + unsigned long val; + void *data; + int curr_ret_stack = current->curr_ret_stack; + int data_size; + + if (size_bytes > FGRAPH_MAX_DATA_SIZE) + return NULL; + + /* Convert to number of longs + data word */ + data_size = DIV_ROUND_UP(size_bytes, sizeof(long)); + + val = get_fgraph_entry(current, curr_ret_stack - 1); + data = &current->ret_stack[curr_ret_stack]; + + curr_ret_stack += data_size + 1; + if (unlikely(curr_ret_stack >= SHADOW_STACK_MAX_INDEX)) + return NULL; + + val = make_fgraph_data(idx, data_size, __get_index(val) + data_size + 1); + + /* Set the last word to be reserved */ + current->ret_stack[curr_ret_stack - 1] = val; + + /* Make sure interrupts see this */ + barrier(); + current->curr_ret_stack = curr_ret_stack; + /* Again sync with interrupts, and reset reserve */ + current->ret_stack[curr_ret_stack - 1] = val; + + return data; +} + +/** + * fgraph_retrieve_data - Retrieve stored data from fgraph_reserve_data() + * @idx: the index of fgraph_array (fgraph_ops::idx) + * @size_bytes: pointer to retrieved data size. + * + * This is to be called by a fgraph_ops retfunc(), to retrieve data that + * was stored by the fgraph_ops entryfunc() on the function entry. + * That is, this will retrieve the data that was reserved on the + * entry of the function that corresponds to the exit of the function + * that the fgraph_ops retfunc() is called on. + * + * Returns: The stored data from fgraph_reserve_data() called by the + * matching entryfunc() for the retfunc() this is called from. + * Or NULL if there was nothing stored. + */ +void *fgraph_retrieve_data(int idx, int *size_bytes) +{ + int index = current->curr_ret_stack - 1; + unsigned long val; + + val = get_fgraph_entry(current, index); + while (__get_type(val) == FGRAPH_TYPE_DATA) { + if (__get_data_index(val) == idx) + goto found; + index -= __get_data_size(val) + 1; + val = get_fgraph_entry(current, index); + } + return NULL; +found: + if (size_bytes) + *size_bytes = __get_data_size(val) * + sizeof(long); + return get_fgraph_data(current, index); +} + /** * fgraph_get_task_var - retrieve a task specific state variable * @gops: The ftrace_ops that owns the task specific variable @@ -424,13 +574,18 @@ int function_graph_enter(unsigned long ret, unsigned long func, for (i = 0; i < fgraph_array_cnt; i++) { struct fgraph_ops *gops = fgraph_array[i]; + int save_curr_ret_stack; if (gops == &fgraph_stub) continue; + save_curr_ret_stack = current->curr_ret_stack; if (ftrace_ops_test(&gops->ops, func, NULL) && gops->entryfunc(&trace, gops)) bitmap |= BIT(i); + else + /* Clear out any saved storage */ + current->curr_ret_stack = save_curr_ret_stack; } if (!bitmap) @@ -452,6 +607,7 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, struct fgraph_ops *gops) { struct ftrace_graph_ent trace; + int save_curr_ret_stack; int index; int type; @@ -468,13 +624,15 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, trace.func = func; trace.depth = current->curr_ret_depth; + save_curr_ret_stack = current->curr_ret_stack; if (gops->entryfunc(&trace, gops)) { if (type == FGRAPH_TYPE_RESERVED) set_fgraph_index_bitmap(current, index, BIT(gops->idx)); else add_fgraph_index_bitmap(current, index, BIT(gops->idx)); return 0; - } + } else + current->curr_ret_stack = save_curr_ret_stack; if (type == FGRAPH_TYPE_RESERVED) { current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; @@ -619,7 +777,8 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs
*ret_regs * curr_ret_stack is after that. */ barrier(); - current->curr_ret_stack -= FGRAPH_RET_INDEX + 1; + current->curr_ret_stack = index - FGRAPH_RET_INDEX; + current->curr_ret_depth--; return ret; } From patchwork Mon Nov 27 13:56:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170171 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3150883vqx; Mon, 27 Nov 2023 06:17:42 -0800 (PST) X-Google-Smtp-Source: AGHT+IGLjo32O3YghWziGfF8K9uubQHaAg1XXRUwwPEeHDi42wvg2ldsH7IogLmssgNXS2lOQAcF X-Received: by 2002:a05:6a20:6a22:b0:18c:346c:d59f with SMTP id p34-20020a056a206a2200b0018c346cd59fmr8322552pzk.62.1701094662286; Mon, 27 Nov 2023 06:17:42 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701094662; cv=none; d=google.com; s=arc-20160816; b=nJmgcBcNJzrHLh/1jB2MKBwfBMBw2FicU0BdDtlfq96L/9YZwJ50Dovp52PLRhNNUe eEI71azG1jFVHAseSLBhaL0bVxR9c8J/WV8k46SO27kMN0dDfsnDNbBnJRx8yPiYxLXA IM6yT9yv6GO4Pi51KyL3Sci+Bm24JlMxh4VeVF4LJD6p5xwYcI+N3/wJ8yRcQbDyCs+q 2kZDBYdxySmFa/TprAyOfdb4LWCaTt1bQOZh4CofqzSugRUvrUqbgADBsb0SudGzp/En JIoDSPyPOSQcpZ8pTywkQKBh1IOo7ZYHoKapT3TISNNDD6g0XVYtkrOqmObCFm6ZUolc zqCw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=gNjysZumaRGj7wRl3wCUGBAp98rdVBpztEtFKkRLpNA=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=hRC1E/Gcnf3oToBNjmZnS+IZftOcVStA4/B0bWFqBFQsyWKha4Jk+BZyFTJgJ0pZod 1OhSiewBhapAPUrnLQgvJPfW43vRQE9kdIv/sHn3IaNNojH9DXT5IaKSSFcWIECNyuBj ke8RlUNJliVH25T6LNG0i3hNyJmn4IHlEhEZYL/596LMNdBQR6L5PLO1wk18i/jf3pgW y1QrE01l9q7BwraoEyVof3aV7mRT9WSVTbBxJnDcV2+2hiRyJ81P+6g1lPnvFotyB/jp tglyf05F/zjCxNeGtdPi1fYhyVff94IKhlwTbYTSM0scpIoi/Hs/UbRxBnLbT9WPDzex ssGw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=XFRXNTZl; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:1 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from morse.vger.email (morse.vger.email. 
From: "Masami Hiramatsu (Google)" Subject: [PATCH v3 18/33] function_graph: Add selftest for passing local variables Date: Mon, 27 Nov 2023 22:56:36 +0900 Message-Id: <170109339640.343914.11370867050596281510.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Steven Rostedt
(VMware) Add boot up selftest that passes variables from a function entry to a function exit, and make sure that they do get passed around. Signed-off-by: Steven Rostedt (VMware) Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Add reserved size test. - Use pr_*() instead of printk(KERN_*). --- kernel/trace/trace_selftest.c | 169 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 169 insertions(+) diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index f0758afa2f7d..4d86cd4c8c8c 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -756,6 +756,173 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) #ifdef CONFIG_FUNCTION_GRAPH_TRACER +#ifdef CONFIG_DYNAMIC_FTRACE + +#define BYTE_NUMBER 123 +#define SHORT_NUMBER 12345 +#define WORD_NUMBER 1234567890 +#define LONG_NUMBER 1234567890123456789LL + +static int fgraph_store_size __initdata; +static const char *fgraph_store_type_name __initdata; +static char *fgraph_error_str __initdata; +static char fgraph_error_str_buf[128] __initdata; + +static __init int store_entry(struct ftrace_graph_ent *trace, + struct fgraph_ops *gops) +{ + const char *type = fgraph_store_type_name; + int size = fgraph_store_size; + void *p; + + p = fgraph_reserve_data(gops->idx, size); + if (!p) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to reserve %s\n", type); + fgraph_error_str = fgraph_error_str_buf; + return 0; + } + + switch (fgraph_store_size) { + case 1: + *(char *)p = BYTE_NUMBER; + break; + case 2: + *(short *)p = SHORT_NUMBER; + break; + case 4: + *(int *)p = WORD_NUMBER; + break; + case 8: + *(long long *)p = LONG_NUMBER; + break; + } + + return 1; +} + +static __init void store_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops) +{ + const char *type = fgraph_store_type_name; + long long expect = 0; + long long found = -1; + int size; + char *p; + + p = fgraph_retrieve_data(gops->idx, &size); + if (!p) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to retrieve %s\n", type); + fgraph_error_str = fgraph_error_str_buf; + return; + } + if (fgraph_store_size > size) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Retrieved size %d is smaller than expected %d\n", + size, (int)fgraph_store_size); + fgraph_error_str = fgraph_error_str_buf; + return; + } + + switch (fgraph_store_size) { + case 1: + expect = BYTE_NUMBER; + found = *(char *)p; + break; + case 2: + expect = SHORT_NUMBER; + found = *(short *)p; + break; + case 4: + expect = WORD_NUMBER; + found = *(int *)p; + break; + case 8: + expect = LONG_NUMBER; + found = *(long long *)p; + break; + } + + if (found != expect) { + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "%s returned not %lld but %lld\n", type, expect, found); + fgraph_error_str = fgraph_error_str_buf; + return; + } + fgraph_error_str = NULL; +} + +static struct fgraph_ops store_bytes __initdata = { + .entryfunc = store_entry, + .retfunc = store_return, +}; + +static int __init test_graph_storage_type(const char *name, int size) +{ + char *func_name; + int len; + int ret; + + fgraph_store_type_name = name; + fgraph_store_size = size; + + snprintf(fgraph_error_str_buf, sizeof(fgraph_error_str_buf), + "Failed to execute storage %s\n", name); + fgraph_error_str = fgraph_error_str_buf; + + pr_cont("PASSED\n"); + pr_info("Testing fgraph storage of %d byte%s: ", size, size > 1 ? 
"s" : ""); + + func_name = "*" __stringify(DYN_FTRACE_TEST_NAME); + len = strlen(func_name); + + ret = ftrace_set_filter(&store_bytes.ops, func_name, len, 1); + if (ret && ret != -ENODEV) { + pr_cont("*Could not set filter* "); + return -1; + } + + ret = register_ftrace_graph(&store_bytes); + if (ret) { + pr_warn("Failed to init store_bytes fgraph tracing\n"); + return -1; + } + + DYN_FTRACE_TEST_NAME(); + + unregister_ftrace_graph(&store_bytes); + + if (fgraph_error_str) { + pr_cont("*** %s ***", fgraph_error_str); + return -1; + } + + return 0; +} +/* Test the storage passed across function_graph entry and return */ +static __init int test_graph_storage(void) +{ + int ret; + + ret = test_graph_storage_type("byte", 1); + if (ret) + return ret; + ret = test_graph_storage_type("short", 2); + if (ret) + return ret; + ret = test_graph_storage_type("word", 4); + if (ret) + return ret; + ret = test_graph_storage_type("long long", 8); + if (ret) + return ret; + return 0; +} +#else +static inline int test_graph_storage(void) { return 0; } +#endif /* CONFIG_DYNAMIC_FTRACE */ + /* Maximum number of functions to trace before diagnosing a hang */ #define GRAPH_MAX_FUNC_TEST 100000000 @@ -913,6 +1080,8 @@ trace_selftest_startup_function_graph(struct tracer *trace, ftrace_set_global_filter(NULL, 0, 1); #endif + ret = test_graph_storage(); + /* Don't test dynamic tracing, the function tracer already did */ out: /* Stop it if we failed */ From patchwork Mon Nov 27 13:56:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170159 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3134067vqx; Mon, 27 Nov 2023 06:02:14 -0800 (PST) X-Google-Smtp-Source: AGHT+IHbdeKxbIGnLtGU66NKtBY5iOWcbMEL8Snn4NYPZhmozV0s0Z8+y1LyyKrj0f3oW7aJfUG6 X-Received: by 2002:a0d:c341:0:b0:5ce:5987:1e67 with SMTP id f62-20020a0dc341000000b005ce59871e67mr9812996ywd.21.1701093734058; Mon, 27 Nov 2023 06:02:14 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093734; cv=none; d=google.com; s=arc-20160816; b=woj0/WBPydUrls1SjsIjouEXfe0flvc6mdbI3GAkL+J6hDJNhhfFyqeU2FYJWqOqtQ O7tRkseY9dgLR+Ju+6DNshZxxanPdIUOwab3nVtOKh8qq43z4/ISgTbNHJkFMhxvEsr1 C8mhA1uDUvY5Lzf9kQugqiZuE1fvOaxA6SQDlD9BRZ9EX3Rjxp7ysEPzYjayDsiitNDn WGfbxoDCFtpA1/GdbkF4HTWhp7K+ZNogroDZhekLHw6jod88Zxe8dPNhzVEYD2fClF5k fE4Uc6Sfjjta9wfpoD9D/gl2Pqf/o4c0GX5xumvB3Ar4e6ltrS92pYdDEQCT9C14WvV/ BLXQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=yGZhqOfTvp3ahfvZ8dZqR9tdXeNvCQL9iaEMq1Hsjfw=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=dMI1kyJ6128gh2/lFQpHSnvNGDYM4H6ToaGb+YqUSg1Eqb5+SSbdLqEQasLdquoaci mYFlEjlIH4bVZYuEMKsTlmKhM1lDs7BtH2MjiJPBLRKd1CD/lVss9/8oHtqmW6T4K6OJ rmPPSX2q01CROQNl/m+VxsZF8yE9gMNtSfycJ2nDF3LHWwaNm4a0Z7NEsJ7Ea0O40Mgr DA/89+EW2WDDRwzGvRD/59sF6vanWBjnDKzLniilqth4UZw+Enr1iTWSXcOAEEIU7u+F MVazIxbeKP49DpdnTZUpW/UWEEoY6D8NxoAexc+46SpLs86wu04sQ8MLyzNEX/tTe2ld lpGA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=uDRb3wkt; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: 
from groat.vger.email (groat.vger.email. [23.128.96.35]) by mx.google.com with ESMTPS id m23-20020a0dca17000000b005d0a874a607si751693ywd.356.2023.11.27.06.02.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:02:14 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) client-ip=23.128.96.35; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=uDRb3wkt; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by groat.vger.email (Postfix) with ESMTP id 3B7F18093F6B; Mon, 27 Nov 2023 06:02:05 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at groat.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233597AbjK0OBY (ORCPT + 99 others); Mon, 27 Nov 2023 09:01:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43978 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233691AbjK0OAu (ORCPT ); Mon, 27 Nov 2023 09:00:50 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C59DE2684 for ; Mon, 27 Nov 2023 05:56:57 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5C2AEC43142; Mon, 27 Nov 2023 13:56:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093417; bh=R6EgaSgZ7Tdx6KgYvPS0TsL+d8KuoSEr4jZGrN8IH3E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uDRb3wktXM7mUmieOpxA0h8PD9PH/lzIA2FmVUvXYYArKqItK0Z3Lbr6iAEiVKI9P uWHcfhZ714xxrl4lhaeeoSGziKQl5A1USSywa6QPsc5tOwhayU/tnPtqsc5B8L+QJY ZNZeQL+W4zKkhW2ug7+oLTFdDLNQ+isaJUrdhJVMo9IdERqLBjmuUiuqaq4hkUwkHf +zn89278SvyjBO1WnGfXWH1IlfwVHX4ZfT6o4eZtOLVrjh1ayuvJCAh0Dh7soO2gh3 WgOtFjKA2eXspMjCGFXQ9vKQh2qLrqtTaB6zXIJhh98TYS9ah7Clk+m99LuRR/FMeU q+L8isT8TrXBQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 19/33] function_graph: Add a new entry handler with parent_ip and ftrace_regs Date: Mon, 27 Nov 2023 22:56:50 +0900 Message-Id: <170109341033.343914.15037337357771755187.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on groat.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:02:05 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726063587052904 X-GMAIL-MSGID: 
1783726063587052904 From: Masami Hiramatsu (Google) Add a new entry handler to fgraph_ops as 'entryregfunc' which takes parent_ip and ftrace_regs. Note that the 'entryfunc' and 'entryregfunc' are mutual exclusive. You can set only one of them. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Update for new multiple fgraph. --- arch/arm64/kernel/ftrace.c | 2 + arch/loongarch/kernel/ftrace_dyn.c | 6 ++++ arch/powerpc/kernel/trace/ftrace.c | 2 + arch/powerpc/kernel/trace/ftrace_64_pg.c | 10 ++++--- arch/x86/kernel/ftrace.c | 42 ++++++++++++++++-------------- include/linux/ftrace.h | 19 +++++++++++--- kernel/trace/fgraph.c | 30 +++++++++++++++++---- 7 files changed, 76 insertions(+), 35 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 205937e04ece..9d71a475d4c5 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -495,7 +495,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, if (bit < 0) return; - if (!function_graph_enter_ops(*parent, ip, fregs->fp, parent, gops)) + if (!function_graph_enter_ops(*parent, ip, fregs->fp, parent, fregs, gops)) *parent = (unsigned long)&return_to_handler; ftrace_test_recursion_unlock(bit); diff --git a/arch/loongarch/kernel/ftrace_dyn.c b/arch/loongarch/kernel/ftrace_dyn.c index 73858c9029cc..39b3f09a5e0c 100644 --- a/arch/loongarch/kernel/ftrace_dyn.c +++ b/arch/loongarch/kernel/ftrace_dyn.c @@ -244,7 +244,11 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct pt_regs *regs = &fregs->regs; unsigned long *parent = (unsigned long *)®s->regs[1]; - prepare_ftrace_return(ip, (unsigned long *)parent); + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + if (!function_graph_enter_regs(regs->regs[1], ip, 0, parent, fregs)) + regs->regs[1] = (unsigned long)&return_to_handler; } #else static int ftrace_modify_graph_caller(bool enable) diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c index 82010629cf88..9bf1b6912116 100644 --- a/arch/powerpc/kernel/trace/ftrace.c +++ b/arch/powerpc/kernel/trace/ftrace.c @@ -422,7 +422,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, if (bit < 0) goto out; - if (!function_graph_enter(parent_ip, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent_ip, ip, 0, (unsigned long *)sp, fregs)) parent_ip = ppc_function_entry(return_to_handler); ftrace_test_recursion_unlock(bit); diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.c b/arch/powerpc/kernel/trace/ftrace_64_pg.c index 7b85c3b460a3..43f6cfaaf7db 100644 --- a/arch/powerpc/kernel/trace/ftrace_64_pg.c +++ b/arch/powerpc/kernel/trace/ftrace_64_pg.c @@ -795,7 +795,8 @@ int ftrace_disable_ftrace_graph_caller(void) * in current thread info. Return the address we want to divert to. 
*/ static unsigned long -__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) +__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp, + struct ftrace_regs *fregs) { unsigned long return_hooker; int bit; @@ -812,7 +813,7 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp return_hooker = ppc_function_entry(return_to_handler); - if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent, ip, 0, (unsigned long *)sp, fregs)) parent = return_hooker; ftrace_test_recursion_unlock(bit); @@ -824,13 +825,14 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); + fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, + fregs->regs.gpr[1], fregs); } #else unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) { - return __prepare_ftrace_return(parent, ip, sp); + return __prepare_ftrace_return(parent, ip, sp, NULL); } #endif #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 845e29b4254f..0f757e399a96 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -614,16 +614,8 @@ int ftrace_disable_ftrace_graph_caller(void) } #endif /* CONFIG_DYNAMIC_FTRACE && !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ -/* - * Hook the return address and push it in the stack of return addrs - * in current thread info. - */ -void prepare_ftrace_return(unsigned long ip, unsigned long *parent, - unsigned long frame_pointer) +static inline bool skip_ftrace_return(void) { - unsigned long return_hooker = (unsigned long)&return_to_handler; - int bit; - /* * When resuming from suspend-to-ram, this function can be indirectly * called from early CPU startup code while the CPU is in real mode, @@ -633,13 +625,28 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent, * This check isn't as accurate as virt_addr_valid(), but it should be * good enough for this purpose, and it's fast. */ - if (unlikely((long)__builtin_frame_address(0) >= 0)) - return; + if ((long)__builtin_frame_address(0) >= 0) + return true; - if (unlikely(ftrace_graph_is_dead())) - return; + if (ftrace_graph_is_dead()) + return true; + + if (atomic_read(¤t->tracing_graph_pause)) + return true; + return false; +} - if (unlikely(atomic_read(¤t->tracing_graph_pause))) +/* + * Hook the return address and push it in the stack of return addrs + * in current thread info. 
+ */ +void prepare_ftrace_return(unsigned long ip, unsigned long *parent, + unsigned long frame_pointer) +{ + unsigned long return_hooker = (unsigned long)&return_to_handler; + int bit; + + if (unlikely(skip_ftrace_return())) return; bit = ftrace_test_recursion_trylock(ip, *parent); @@ -661,17 +668,14 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops); int bit; - if (unlikely(ftrace_graph_is_dead())) - return; - - if (unlikely(atomic_read(¤t->tracing_graph_pause))) + if (unlikely(skip_ftrace_return())) return; bit = ftrace_test_recursion_trylock(ip, *parent); if (bit < 0) return; - if (!function_graph_enter_ops(*parent, ip, 0, parent, gops)) + if (!function_graph_enter_ops(*parent, ip, 0, parent, fregs, gops)) *parent = (unsigned long)&return_to_handler; ftrace_test_recursion_unlock(bit); diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 1c121237016e..6da6cc9aaeaf 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1063,6 +1063,11 @@ typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, struct fgraph_ops *); /* entry */ +typedef int (*trace_func_graph_regs_ent_t)(unsigned long func, + unsigned long parent_ip, + struct ftrace_regs *fregs, + struct fgraph_ops *); /* entry w/ regs */ + extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); #ifdef CONFIG_FUNCTION_GRAPH_TRACER @@ -1070,6 +1075,7 @@ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; + trace_func_graph_regs_ent_t entryregfunc; struct ftrace_ops ops; /* for the hash lists */ void *private; int idx; @@ -1106,13 +1112,20 @@ struct ftrace_ret_stack { extern void return_to_handler(void); extern int -function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp); +function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs); + +static inline int function_graph_enter(unsigned long ret, unsigned long func, + unsigned long fp, unsigned long *retp) +{ + return function_graph_enter_regs(ret, func, fp, retp, NULL); +} extern int function_graph_enter_ops(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp, - struct fgraph_ops *gops); + struct ftrace_regs *fregs, struct fgraph_ops *gops); struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int idx); diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 317202943936..47040a78dfb3 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -544,9 +544,21 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, # define MCOUNT_INSN_SIZE 0 #endif +static inline int call_entry_func(struct ftrace_graph_ent *trace, + unsigned long func, unsigned long ret, + struct ftrace_regs *fregs, + struct fgraph_ops *gops) +{ + if (gops->entryregfunc) + return gops->entryregfunc(func, ret, fregs, gops); + + return gops->entryfunc(trace, gops); +} + /* If the caller does not use ftrace, call this function. 
*/ -int function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp) +int function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs) { struct ftrace_graph_ent trace; unsigned long bitmap = 0; @@ -581,7 +593,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, save_curr_ret_stack = current->curr_ret_stack; if (ftrace_ops_test(&gops->ops, func, NULL) && - gops->entryfunc(&trace, gops)) + call_entry_func(&trace, func, ret, fregs, gops)) bitmap |= BIT(i); else /* Clear out any saved storage */ @@ -604,6 +616,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, /* This is called from ftrace_graph_func() via ftrace */ int function_graph_enter_ops(unsigned long ret, unsigned long func, unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs, struct fgraph_ops *gops) { struct ftrace_graph_ent trace; @@ -625,7 +638,7 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, trace.func = func; trace.depth = current->curr_ret_depth; save_curr_ret_stack = current->curr_ret_stack; - if (gops->entryfunc(&trace, gops)) { + if (call_entry_func(&trace, func, ret, fregs, gops)) { if (type == FGRAPH_TYPE_RESERVED) set_fgraph_index_bitmap(current, index, BIT(gops->idx)); else @@ -906,7 +919,8 @@ void fgraph_init_ops(struct ftrace_ops *dst_ops, struct ftrace_ops *src_ops) { dst_ops->func = ftrace_graph_func; - dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB; + dst_ops->flags = FTRACE_OPS_FL_PID | FTRACE_OPS_GRAPH_STUB | + FTRACE_OPS_FL_SAVE_ARGS; #ifdef FTRACE_GRAPH_TRAMP_ADDR dst_ops->trampoline = FTRACE_GRAPH_TRAMP_ADDR; @@ -1148,10 +1162,14 @@ int register_ftrace_graph(struct fgraph_ops *gops) int ret = 0; int i; + if (gops->entryfunc && gops->entryregfunc) + return -EINVAL; + mutex_lock(&ftrace_lock); if (!gops->ops.func) { - gops->ops.flags |= FTRACE_OPS_GRAPH_STUB; + gops->ops.flags |= FTRACE_OPS_GRAPH_STUB | + FTRACE_OPS_FL_SAVE_ARGS; gops->ops.func = ftrace_graph_func; #ifdef FTRACE_GRAPH_TRAMP_ADDR gops->ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR; From patchwork Mon Nov 27 13:57:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170172 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3151267vqx; Mon, 27 Nov 2023 06:18:04 -0800 (PST) X-Google-Smtp-Source: AGHT+IGiwr9Qr6i5tZpz5DppmNfwVduTT2MvTH75axzi19IMRaaAs8+iziK1w7jlACuBAMT1Oa/T X-Received: by 2002:a05:6808:21a4:b0:3ae:349:7c03 with SMTP id be36-20020a05680821a400b003ae03497c03mr16611012oib.27.1701094676539; Mon, 27 Nov 2023 06:17:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701094676; cv=none; d=google.com; s=arc-20160816; b=KyILg82AZWTLbh654kaPbfxDUphvIPljCh/509iroc08U/7m6jRll4G3YZI+euGe/I m54gNTd9o1GVP91S7dnpWhUDg/NaqIaLxh/2OWoZvw3Mxk+J5gmfExAAF+DVSIYuAcm+ jkRVijXJy6SPOCHo4J1hRZQeFQCWHgLgyo1okUWPQzRiLIYIjYHks3509UvTY08Klirj J85VnNngYu9LkG87C8WvdIkpXMAPdHsQ8xcCH4R8DYW+OA1oHsXqWOQRGX/TE50CcJhy VpxF9X/gXJNWTTZ2kZhRiKsesYdzzsbdX/ohNBD3zMszENZKah9msY/mjWqIN7YzjiwI a4NA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=3svWY4wybKKKx/5loLVC6PbiBIqgQ8RkJ3VNOBdXGlQ=; 
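
The entryregfunc hook added by this patch gives the entry callback direct access to ftrace_regs, with call_entry_func() in fgraph.c dispatching to whichever of the two callbacks is set. A rough sketch of a user follows; the handler names and the printed values are hypothetical, only the callback signature, the fgraph_ops field and the mutual-exclusion check in register_ftrace_graph() come from this patch, and ftrace_regs_get_argument() is the existing ftrace_regs accessor:

	/* Hypothetical handler: peek at the first argument of the traced function */
	static int my_entry_regs(unsigned long func, unsigned long parent_ip,
				 struct ftrace_regs *fregs, struct fgraph_ops *gops)
	{
		unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

		pr_debug("%ps (arg0=%lx) called from %ps\n",
			 (void *)func, arg0, (void *)parent_ip);
		return 1;	/* non-zero: also trace this function's return */
	}

	static void my_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops)
	{
		/* ordinary return handler, unchanged by this patch */
	}

	static struct fgraph_ops my_gops = {
		.entryregfunc = my_entry_regs,	/* instead of .entryfunc */
		.retfunc      = my_return,
	};

Setting both .entryfunc and .entryregfunc is rejected with -EINVAL by the register_ftrace_graph() hunk of this patch.
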
fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=WCpk2GBXd0i/jI6PTSMq4zGuupZ5be1mH+RS1MxEaULpgzgUSDLDkocK8yoQOqXh7S gh8RFgl/oqwGE8U4NqTOUwPAaEYkFS28o23Pk8//6xsv1UEsslSwBa8BkeJ2vCAvRgKj AhWhH2lSAFvcm7DCQ6CWXHZN359l0IKgXJX3Xxtqx1xPcpX4bwguBXOLQbrgDh6Vn7cQ h5+VjxI7NL6t7H2XWj5ORKZWGZ1TRm9CdZtBxjwT2Se1tKBedz2RLysdsjri1loeUaXy YPCy+QK+QF7S5yFejBqfqKkenugNMOLiyUPx16igLmLjSN7T9GFkAV28Lro7WCj65l3l YWng== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=mbtjKytB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:3 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from lipwig.vger.email (lipwig.vger.email. [2620:137:e000::3:3]) by mx.google.com with ESMTPS id bg32-20020a05680817a000b003af05854023si3978557oib.249.2023.11.27.06.17.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:17:56 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:3 as permitted sender) client-ip=2620:137:e000::3:3; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=mbtjKytB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:3 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by lipwig.vger.email (Postfix) with ESMTP id 5C877809638E; Mon, 27 Nov 2023 06:17:50 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at lipwig.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233689AbjK0ORl (ORCPT + 99 others); Mon, 27 Nov 2023 09:17:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233690AbjK0ORV (ORCPT ); Mon, 27 Nov 2023 09:17:21 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D67BC19AC for ; Mon, 27 Nov 2023 05:57:09 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BBD45C43142; Mon, 27 Nov 2023 13:57:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093429; bh=C8DaLdk6PxxPZKYVu9FKFgBJUL1Q208ei+ScrPCkgjE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mbtjKytBcyycXAOBt8HVyUi/khWk+pg2FMuyMp/GPe0ISi3IHpIh3Q/V0vRSx0Xbs ZVtI/yQ6NtazvEKoOSxKck59X54YUAlaYuGsbnOiaqJ8yy5snSr8R3Am6JHXvvjDmk 6oZXyFBfzOPKFyjZJ3XSLsPVGNrjwx/sOald9R36y/MHuYfZgQpcKLgomx1VNu7OiC AxH3YqYKM2WCN6yoIGcfZoiTuSPl7t6xwPGWGXTCxg+vwCytUg9PITNk3Q8Y5XN7pE +EQ/6uiE7NVZ/k07tIFN4J0ZsTq32+ogO5r5qx0Bk4FkH3M1FirE5i5uVvoHYsrymR tG4+9HOVXXD+w== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 20/33] function_graph: Add a new exit handler with parent_ip and ftrace_regs Date: Mon, 27 Nov 2023 22:57:03 +0900 Message-Id: <170109342272.343914.17495684638995923633.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: 
<170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:17:50 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783727051604710006 X-GMAIL-MSGID: 1783727051604710006 From: Masami Hiramatsu (Google) Add a new return handler to fgraph_ops as 'retregfunc' which takes parent_ip and ftrace_regs instead of ftrace_graph_ret. This handler is available only if the arch support CONFIG_HAVE_FUNCTION_GRAPH_FREGS. Note that the 'retfunc' and 'reregfunc' are mutual exclusive. You can set only one of them. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Update for new multiple fgraph. - Save the return address to instruction pointer in ftrace_regs. --- arch/x86/include/asm/ftrace.h | 2 + include/linux/ftrace.h | 10 +++++- kernel/trace/Kconfig | 5 ++- kernel/trace/fgraph.c | 70 ++++++++++++++++++++++++++++------------- 4 files changed, 63 insertions(+), 24 deletions(-) diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index 415cf7a2ec2c..0b306c82855d 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -72,6 +72,8 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) override_function_with_return(&(fregs)->regs) #define ftrace_regs_query_register_offset(name) \ regs_query_register_offset(name) +#define ftrace_regs_get_frame_pointer(fregs) \ + frame_pointer(&(fregs)->regs) struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 6da6cc9aaeaf..79875a00c02b 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -43,7 +43,9 @@ struct dyn_ftrace; char *arch_ftrace_match_adjust(char *str, const char *search); -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs); +#elif defined(CONFIG_HAVE_FUNCTION_GRAPH_RETVAL) struct fgraph_ret_regs; unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs); #else @@ -157,6 +159,7 @@ struct ftrace_regs { #define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0) #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ + static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs) { if (!fregs) @@ -1067,6 +1070,10 @@ typedef int (*trace_func_graph_regs_ent_t)(unsigned long func, unsigned long parent_ip, struct ftrace_regs *fregs, struct fgraph_ops *); /* entry w/ regs */ +typedef void (*trace_func_graph_regs_ret_t)(unsigned long func, + unsigned long parent_ip, + struct ftrace_regs *, + struct fgraph_ops *); /* return w/ regs */ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); @@ -1076,6 +1083,7 @@ struct fgraph_ops { trace_func_graph_ent_t entryfunc; trace_func_graph_ret_t retfunc; trace_func_graph_regs_ent_t entryregfunc; + trace_func_graph_regs_ret_t retregfunc; struct ftrace_ops ops; /* for the hash lists */ void *private; 
int idx; diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 61c541c36596..308b3bec01b1 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -34,6 +34,9 @@ config HAVE_FUNCTION_GRAPH_TRACER config HAVE_FUNCTION_GRAPH_RETVAL bool +config HAVE_FUNCTION_GRAPH_FREGS + bool + config HAVE_DYNAMIC_FTRACE bool help @@ -232,7 +235,7 @@ config FUNCTION_GRAPH_TRACER config FUNCTION_GRAPH_RETVAL bool "Kernel Function Graph Return Value" - depends on HAVE_FUNCTION_GRAPH_RETVAL + depends on HAVE_FUNCTION_GRAPH_RETVAL || HAVE_FUNCTION_GRAPH_FREGS depends on FUNCTION_GRAPH_TRACER default n help diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 47040a78dfb3..6faf04740908 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -656,8 +656,8 @@ int function_graph_enter_ops(unsigned long ret, unsigned long func, /* Retrieve a function return address to the trace stack on thread info.*/ static struct ftrace_ret_stack * -ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, - unsigned long frame_pointer, int *index) +ftrace_pop_return_trace(unsigned long *ret, unsigned long frame_pointer, + int *index) { struct ftrace_ret_stack *ret_stack; @@ -702,10 +702,6 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, *index += FGRAPH_RET_INDEX; *ret = ret_stack->ret; - trace->func = ret_stack->func; - trace->calltime = ret_stack->calltime; - trace->overrun = atomic_read(¤t->trace_overrun); - trace->depth = current->curr_ret_depth; /* * We still want to trace interrupts coming in if * max_depth is set to 1. Make sure the decrement is @@ -744,21 +740,42 @@ static struct notifier_block ftrace_suspend_notifier = { /* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */ struct fgraph_ret_regs; +static void fgraph_call_retfunc(struct ftrace_regs *fregs, + struct fgraph_ret_regs *ret_regs, + struct ftrace_ret_stack *ret_stack, + struct fgraph_ops *gops) +{ + struct ftrace_graph_ret trace; + + trace.func = ret_stack->func; + trace.calltime = ret_stack->calltime; + trace.overrun = atomic_read(¤t->trace_overrun); + trace.depth = current->curr_ret_depth; + trace.rettime = trace_clock_local(); +#ifdef CONFIG_FUNCTION_GRAPH_RETVAL + if (fregs) + trace.retval = ftrace_regs_return_value(fregs); + else + trace.retval = fgraph_ret_regs_return_value(ret_regs); +#endif + gops->retfunc(&trace, gops); +} + /* * Send the trace to the ring-buffer. * @return the original return address. 
*/ -static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs, +static unsigned long __ftrace_return_to_handler(struct ftrace_regs *fregs, + struct fgraph_ret_regs *ret_regs, unsigned long frame_pointer) { struct ftrace_ret_stack *ret_stack; - struct ftrace_graph_ret trace; unsigned long bitmap; unsigned long ret; int index; int i; - ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &index); + ret_stack = ftrace_pop_return_trace(&ret, frame_pointer, &index); if (unlikely(!ret_stack)) { ftrace_graph_stop(); @@ -767,10 +784,8 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs return (unsigned long)panic; } - trace.rettime = trace_clock_local(); -#ifdef CONFIG_FUNCTION_GRAPH_RETVAL - trace.retval = fgraph_ret_regs_return_value(ret_regs); -#endif + if (fregs) + ftrace_regs_set_instruction_pointer(fregs, ret); bitmap = get_fgraph_index_bitmap(current, index); for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) { @@ -781,7 +796,10 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs if (gops == &fgraph_stub) continue; - gops->retfunc(&trace, gops); + if (gops->retregfunc) + gops->retregfunc(ret_stack->func, ret, fregs, gops); + else + fgraph_call_retfunc(fregs, ret_regs, ret_stack, gops); } /* @@ -796,20 +814,22 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs return ret; } -/* - * After all architecures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can - * leave only ftrace_return_to_handler(ret_regs). - */ -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs) +{ + return __ftrace_return_to_handler(fregs, NULL, + ftrace_regs_get_frame_pointer(fregs)); +} +#elif defined(CONFIG_HAVE_FUNCTION_GRAPH_RETVAL) unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs) { - return __ftrace_return_to_handler(ret_regs, + return __ftrace_return_to_handler(NULL, ret_regs, fgraph_ret_regs_frame_pointer(ret_regs)); } #else unsigned long ftrace_return_to_handler(unsigned long frame_pointer) { - return __ftrace_return_to_handler(NULL, frame_pointer); + return __ftrace_return_to_handler(NULL, NULL, frame_pointer); } #endif @@ -1162,9 +1182,15 @@ int register_ftrace_graph(struct fgraph_ops *gops) int ret = 0; int i; - if (gops->entryfunc && gops->entryregfunc) + if ((gops->entryfunc && gops->entryregfunc) || + (gops->retfunc && gops->retregfunc)) return -EINVAL; +#ifndef CONFIG_HAVE_FUNCTION_GRAPH_FREGS + if (gops->retregfunc) + return -EOPNOTSUPP; +#endif + mutex_lock(&ftrace_lock); if (!gops->ops.func) { From patchwork Mon Nov 27 13:57:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170157 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3133722vqx; Mon, 27 Nov 2023 06:01:56 -0800 (PST) X-Google-Smtp-Source: AGHT+IFRhYj77PEXkJQVnGLcLtwsKAq7NDRoH/j8Ypg2FeIjqsocFR5hwPthiBqz5jtUDu4gD4uC X-Received: by 2002:a05:6870:b512:b0:1fa:2014:6d6e with SMTP id v18-20020a056870b51200b001fa20146d6emr9499453oap.39.1701093716651; Mon, 27 Nov 2023 06:01:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093716; cv=none; d=google.com; s=arc-20160816; b=Ke5hs64fX8FJ/GDGEromulSi3LvNlLi2Z8mSZL2b+WLrL1suDYRH8ymNtfdFRT2Trx cwlz6wCazilaRu9KZEyu/nP9x3I8Yv2033h4qNGWr5wm+DPGfbhzue9hcaUT0KfjprET 
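
Analogously to the entry side, a hypothetical user of the new retregfunc hook might look like the sketch below; only the callback signature, the fgraph_ops field and the error codes enforced in register_ftrace_graph() come from this patch, the handler bodies are illustrative, and ftrace_regs_return_value() is the accessor this series later renames to ftrace_regs_get_return_value():

	/* Hypothetical return handler that reads the return value from registers */
	static void my_ret_regs(unsigned long func, unsigned long parent_ip,
				struct ftrace_regs *fregs, struct fgraph_ops *gops)
	{
		pr_debug("%ps returned %lx to %ps\n", (void *)func,
			 (unsigned long)ftrace_regs_return_value(fregs),
			 (void *)parent_ip);
	}

	static int my_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops)
	{
		return 1;	/* trace every entry so the return handler fires */
	}

	static struct fgraph_ops my_ret_gops = {
		.entryfunc  = my_entry,
		.retregfunc = my_ret_regs,	/* instead of .retfunc */
	};

	static int __init my_probe_init(void)
	{
		/*
		 * register_ftrace_graph() returns -EINVAL if both .retfunc and
		 * .retregfunc (or both entry callbacks) are set, and
		 * -EOPNOTSUPP if .retregfunc is used on an architecture without
		 * CONFIG_HAVE_FUNCTION_GRAPH_FREGS.
		 */
		return register_ftrace_graph(&my_ret_gops);
	}
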
4Olu5X36dhXYROHAKE9Yy7HT7UVwJ77mLqmB8sOeCzUyGn6O6+7e75pXqNY22tsNfEgM Nb0CN/jFhPToLOMNdlIc9yLFz5xSSfFs2hPsUbCDQtjvFWizyTfbnUb+XEpMn0ORoD8P 81jIIw9z5Ro17++Eqso+HggxX7aseasNWMDSGLR0yA2EZ7xpYT5gqPKnYul4EBvbSq7X 4Qag== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=XsLCt2zocbaN2T0HWhPnSL85Pn7Z9+qHfokTfISPpsQ=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=QdXxZ9lOeOQysdSyYhS3rU+kz3TnrI4/Agzv2GDz2/vGOSZM1HRIulXmrkDu5Yz5UQ XMiAoNNzXXRhh9V7OXRZkxa8ar8z/y64jIJfVEXaas3G4/OlSdNLDQlDTxLHOwcAU4h0 bIu/GBi4qJa8vpUCFEsCDVAuXjxE4N8T4OWaREBjUoF7EFBdd6XqDBRBlf2LTFDF8fQ8 Ek8EM+8wtPM09Ml7hxkI+7G6b25ZvwUmssAresAvXULunZU1Ba02oaxvzTeJvmpRGDW0 u45HJyxQPWWAVgm6ddspGB+iW9LUbjiCddEP6nvPbTfkfspZiK5FJfDrMGaa5XEvhBUD KNfw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=tc1U8C8B; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from agentk.vger.email (agentk.vger.email. [23.128.96.32]) by mx.google.com with ESMTPS id z12-20020a05683020cc00b006d7eee62b0fsi3605351otq.328.2023.11.27.06.01.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:01:56 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) client-ip=23.128.96.32; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=tc1U8C8B; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by agentk.vger.email (Postfix) with ESMTP id 1A68380964D0; Mon, 27 Nov 2023 06:01:46 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at agentk.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233099AbjK0OBb (ORCPT + 99 others); Mon, 27 Nov 2023 09:01:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233739AbjK0OA5 (ORCPT ); Mon, 27 Nov 2023 09:00:57 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 263E42701 for ; Mon, 27 Nov 2023 05:57:22 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E92A8C43142; Mon, 27 Nov 2023 13:57:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093441; bh=5BNVBB7peZKxOYYyHONtdZrMFDGpcSDPvsIQ08LtrvI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tc1U8C8BCNN5UzRem7hdSm4O98H/yFk/QaKBcxjKhB4Y15Ed35bD5P1DcNtu7Ly7L yDFDZM5DkWfQNzsY818YkfJMK4+lW1sEppUvuPGc7SneWKamDbU4hKW0Dzb1Bk/3yf ejFzvhMVB+8MDzkAzj2JqpeJgGQZFgRlcQY2hQ0KQv9jy8t+chs3E4F2Zq98+2dfla MHOPRV7nFL7fVmSxa69OyO9ke4gqW6ttpsfLK0C9pJj/1cA1gKjugk7V75VG2D+5/7 67cT19Igd4Vk0yODvBs6w8akuKyDEgi0fwnkLUTTEAV5yLWendJUvwysvzAhqnS9fu 29nl/Z1FmT++A== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: 
linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 21/33] x86/ftrace: Enable HAVE_FUNCTION_GRAPH_FREGS Date: Mon, 27 Nov 2023 22:57:15 +0900 Message-Id: <170109343500.343914.8822279017262195121.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on agentk.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:01:46 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726045039380561 X-GMAIL-MSGID: 1783726045039380561 From: Masami Hiramatsu (Google) Support HAVE_FUNCTION_GRAPH_FREGS on x86-64, which saves ftrace_regs on the stack in ftrace_graph return trampoline so that the callbacks can access registers via ftrace_regs APIs. Note that this only recovers 'rax' and 'rdx' registers because other registers are not used anymore and recovered by caller. 'rax' and 'rdx' will be used for passing the return value. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Add a comment about rip. Changes in v2: - Save rsp register and drop clearing orig_ax. --- arch/x86/Kconfig | 3 ++- arch/x86/kernel/ftrace_64.S | 37 +++++++++++++++++++++++++++++-------- 2 files changed, 31 insertions(+), 9 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 66bfabae8814..4b4c2f9d67da 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -219,7 +219,8 @@ config X86 select HAVE_FAST_GUP select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD - select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_FREGS if HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_FUNCTION_GRAPH_RETVAL if !HAVE_DYNAMIC_FTRACE_WITH_ARGS select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE) select HAVE_FUNCTION_TRACER select HAVE_GCC_PLUGINS diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index 945cfa5f7239..89a479f0c332 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -348,21 +348,42 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__) SYM_CODE_START(return_to_handler) UNWIND_HINT_UNDEFINED ANNOTATE_NOENDBR - subq $24, %rsp + /* + * Save the registers requires for ftrace_regs; + * rax, rcx, rdx, rdi, rsi, r8, r9 and rbp + */ + subq $(FRAME_SIZE), %rsp + movq %rax, RAX(%rsp) + movq %rcx, RCX(%rsp) + movq %rdx, RDX(%rsp) + movq %rsi, RSI(%rsp) + movq %rdi, RDI(%rsp) + movq %r8, R8(%rsp) + movq %r9, R9(%rsp) + movq %rbp, RBP(%rsp) + /* + * orig_ax is not cleared because it is used for indicating the direct + * trampoline in the fentry. And rip is not set because we don't know + * the correct return address here. 
+ */ + + leaq FRAME_SIZE(%rsp), %rcx + movq %rcx, RSP(%rsp) - /* Save the return values */ - movq %rax, (%rsp) - movq %rdx, 8(%rsp) - movq %rbp, 16(%rsp) movq %rsp, %rdi call ftrace_return_to_handler movq %rax, %rdi - movq 8(%rsp), %rdx - movq (%rsp), %rax - addq $24, %rsp + /* + * Restore only rax and rdx because other registers are not used + * for return value nor callee saved. Caller will reuse/recover it. + */ + movq RDX(%rsp), %rdx + movq RAX(%rsp), %rax + + addq $(FRAME_SIZE), %rsp /* * Jump back to the old return address. This cannot be JMP_NOSPEC rdi * since IBT would demand that contain ENDBR, which simply isn't so for From patchwork Mon Nov 27 13:57:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170162 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3135856vqx; Mon, 27 Nov 2023 06:03:45 -0800 (PST) X-Google-Smtp-Source: AGHT+IFVXkce0KTdXmkVYqqJ5wjSC4NVuPrhVi2wl4ysnb4IdSLQAK1df9NWJdkgtQgVqP38loVw X-Received: by 2002:a81:5e43:0:b0:5be:7046:b2f7 with SMTP id s64-20020a815e43000000b005be7046b2f7mr8906130ywb.40.1701093825019; Mon, 27 Nov 2023 06:03:45 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093825; cv=none; d=google.com; s=arc-20160816; b=bvidAoDHKFL7AfLpGhMDaQCwqQuOD7adrgtKPzDck9GGHjyrr2eKHAOTPkImeXnjqG A3URQN0ar07mB7hRujvJaT0SG+8ztOlvk6oZpLhq8U0R9qoTMk2IgxjiK4BlOaecONji MlTOQwJLIoM692NVkVpmstY+pQGhpl6s7Kenm1ag2LgoptYxF7ZTWoWtuCuF27N+5S70 GmfzEfM8L72shwYK7Je/BiBlfJeCxbBriOT5A0LdUTNFN+e1unwEVQoE1usCpqTCyfFD qskO8No49vYlTm3R7UiYl/q+qtof4OiG9AHCfa0J4vAd6MhXet8INCBdf7TH9m2OfikI bqiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=oKaa2V947Ime17JK0nRKcxx8GCTqrCCMWN41KLGlrec=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=qU3XM2456+TgnqJ612vnKnDH2K+RgZU4YeX3nmARIarlbvGRqqGZ89a6bvyN879kws YtN1gPdrWvMoyv+66Rs/umJAzrh5Iprw53ZyxUXMEsAR2m2zolG7OIGJpZhsRb3DfECL TGFaOOLjZnxUIrxcEA84QhhQIHeL/oJc+kpXEuNh9vH/mBd9hAu5sSUDQ6foB4vQAb+z R80U5Ru7rcbJqEfdFBsysc/fgDAZ1kDAmIEb0iYzchi/2Ed4O0i5BfQ1is1IZ6ipJ3nT Hc8tcvyH/Q+2m2jrYLwj8inQVfw7CGUQBRmq/dZv8fnjjoP4+lTyOAXRZXshDkSzouaA aGCQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=AhIzkEBU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from lipwig.vger.email (lipwig.vger.email. 
[23.128.96.33]) by mx.google.com with ESMTPS id j5-20020a81ec05000000b0056ce2ff50fcsi5986293ywm.411.2023.11.27.06.03.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:03:44 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) client-ip=23.128.96.33; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=AhIzkEBU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by lipwig.vger.email (Postfix) with ESMTP id 1CC5B8070DBF; Mon, 27 Nov 2023 06:02:18 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at lipwig.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233276AbjK0OBm (ORCPT + 99 others); Mon, 27 Nov 2023 09:01:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233316AbjK0OBE (ORCPT ); Mon, 27 Nov 2023 09:01:04 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 901942724 for ; Mon, 27 Nov 2023 05:57:34 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3AFFBC433CB; Mon, 27 Nov 2023 13:57:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093454; bh=XgejxEeSUCgC78mdpML2P/NZltFSt+DFPHLOkxfE4cc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AhIzkEBUjsZmIX7dBCEzhF6+8hcCIffZrHxn4j0PEX6cacO0NeiAE0mrJzYrloJcc YX0S1gxD6Y+Vux81EMxp99RHo6nG1X/78Zhli8DrYQDhEki3Oo2l0KnS2yD1NJDhmI kgDgWxM1sMONq3aMX2mEmd+5DkgHJHJYFiVg7tgdttcV3sIQyY8qXALxZNhVbp6DCV SbgVOkTNKG/epk0nSgIsp5pIQ0I+H7nVZqsYbHuOMhOgX3MOR4aQKmLmlF6zVQe3JP AAjvIqyu2srbtBOBDb0LfTRi0SU30j50yeTHUP/e/pPNiCrFfvyV2IZYZZjRLIw/45 kUSThxUX2jnFA== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 22/33] tracing: Rename ftrace_regs_return_value to ftrace_regs_get_return_value Date: Mon, 27 Nov 2023 22:57:27 +0900 Message-Id: <170109344719.343914.7390510141635028198.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:02:18 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726158371161520 X-GMAIL-MSGID: 1783726158371161520 From: Masami Hiramatsu (Google) 
Rename ftrace_regs_return_value to ftrace_regs_get_return_value as same as other ftrace_regs_get/set_* APIs. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Newly added. --- arch/loongarch/include/asm/ftrace.h | 2 +- arch/powerpc/include/asm/ftrace.h | 2 +- arch/s390/include/asm/ftrace.h | 2 +- arch/x86/include/asm/ftrace.h | 2 +- include/linux/ftrace.h | 2 +- kernel/trace/fgraph.c | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/loongarch/include/asm/ftrace.h b/arch/loongarch/include/asm/ftrace.h index a11996eb5892..a9c3d0f2f941 100644 --- a/arch/loongarch/include/asm/ftrace.h +++ b/arch/loongarch/include/asm/ftrace.h @@ -70,7 +70,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) regs_get_kernel_argument(&(fregs)->regs, n) #define ftrace_regs_get_stack_pointer(fregs) \ kernel_stack_pointer(&(fregs)->regs) -#define ftrace_regs_return_value(fregs) \ +#define ftrace_regs_get_return_value(fregs) \ regs_return_value(&(fregs)->regs) #define ftrace_regs_set_return_value(fregs, ret) \ regs_set_return_value(&(fregs)->regs, ret) diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index 9e5a39b6a311..7e138e0e3baf 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -69,7 +69,7 @@ ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs) regs_get_kernel_argument(&(fregs)->regs, n) #define ftrace_regs_get_stack_pointer(fregs) \ kernel_stack_pointer(&(fregs)->regs) -#define ftrace_regs_return_value(fregs) \ +#define ftrace_regs_get_return_value(fregs) \ regs_return_value(&(fregs)->regs) #define ftrace_regs_set_return_value(fregs, ret) \ regs_set_return_value(&(fregs)->regs, ret) diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 5a82b08f03cd..01e775c98425 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -88,7 +88,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, regs_get_kernel_argument(&(fregs)->regs, n) #define ftrace_regs_get_stack_pointer(fregs) \ kernel_stack_pointer(&(fregs)->regs) -#define ftrace_regs_return_value(fregs) \ +#define ftrace_regs_get_return_value(fregs) \ regs_return_value(&(fregs)->regs) #define ftrace_regs_set_return_value(fregs, ret) \ regs_set_return_value(&(fregs)->regs, ret) diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index 0b306c82855d..a061f8832b20 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -64,7 +64,7 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) regs_get_kernel_argument(&(fregs)->regs, n) #define ftrace_regs_get_stack_pointer(fregs) \ kernel_stack_pointer(&(fregs)->regs) -#define ftrace_regs_return_value(fregs) \ +#define ftrace_regs_get_return_value(fregs) \ regs_return_value(&(fregs)->regs) #define ftrace_regs_set_return_value(fregs, ret) \ regs_set_return_value(&(fregs)->regs, ret) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 79875a00c02b..da2a23f5a9ed 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -187,7 +187,7 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs) regs_get_kernel_argument(ftrace_get_regs(fregs), n) #define ftrace_regs_get_stack_pointer(fregs) \ kernel_stack_pointer(ftrace_get_regs(fregs)) -#define ftrace_regs_return_value(fregs) \ +#define ftrace_regs_get_return_value(fregs) \ regs_return_value(ftrace_get_regs(fregs)) #define ftrace_regs_set_return_value(fregs, ret) \ 
regs_set_return_value(ftrace_get_regs(fregs), ret) diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 6faf04740908..d1740f990ce7 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -754,7 +754,7 @@ static void fgraph_call_retfunc(struct ftrace_regs *fregs, trace.rettime = trace_clock_local(); #ifdef CONFIG_FUNCTION_GRAPH_RETVAL if (fregs) - trace.retval = ftrace_regs_return_value(fregs); + trace.retval = ftrace_regs_get_return_value(fregs); else trace.retval = fgraph_ret_regs_return_value(ret_regs); #endif From patchwork Mon Nov 27 13:57:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170161 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3135581vqx; Mon, 27 Nov 2023 06:03:29 -0800 (PST) X-Google-Smtp-Source: AGHT+IFF7eJz/IVXWb8kJrjCc4rxwzC4zWg8QjmUKZeBg+XYMvOg+EvntKFLiDQY2n5w61fkUQif X-Received: by 2002:a17:90b:4b0f:b0:281:8e9:7b86 with SMTP id lx15-20020a17090b4b0f00b0028108e97b86mr10495247pjb.23.1701093809548; Mon, 27 Nov 2023 06:03:29 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093809; cv=none; d=google.com; s=arc-20160816; b=egdSIqXOPkypL7mHKkwqabRCP0vrdFM3dk9A/8Izv5K9bpewJgzkyotaO2fxTvRy4k M3AixqOxUBL9Em2sMMMoNNHu/5wBFZkiA1XnII1zwzDlOPOKzraF6nPU3YsgKxvEDGuT TkCyLxsbdsqAF55Luo0lQJ697Fcg2t/vsyYfQz+OOSgXroNutcQgjtZlsmABpsWZSA4w FCIox7EUyznjOXsyGaTSPA0u8TKc7TtoF0ulj3srTlW4zmfpxRzvRYuZHZ6e325FRwFz Hmopo//JZaj6jPSrdGdhmo6JcXw5jiovGE5zERlQz8LPYpe5upVKV6mt3QO4Iv1MVYhL eRoQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=fZ0jd+o/ZSLrIRQfuGu+aLWmGpes0zoYvq3h/cU5EhE=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=qTdq52mevttVXcUeUZRG7Qbduj0zyCJOjvq8BVVDXAr+Nn9ajzL3WwjgCZ/zdGDIky bljuQjgaiWQryMJX2ICd+vYE4kmOZ03/TLi5wStHXtHMko9nRFDLcbx5z4ePoFvYLCBm 5Q7EzrL9MKQWAAo2p0kjrp9BSzpPNf97jB2zsFb9Kp3HI50iclFPzxDN1jF53klsag3z fXoqCNMeeBmL03pi0C6To5o7zcFXye1ieHnPH3vo0lpsBRW4aAk5/tSlDo4PDbX0r0ol svYkDctCWKp8WGdzvJ3M4LyCfiIbSLjC719iS70kai0hnjjQCz6whmRT9fxb9mJqI+et t30w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=mjl1gU6E; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from snail.vger.email (snail.vger.email. 
[23.128.96.37]) by mx.google.com with ESMTPS id in24-20020a17090b439800b0028031f2f450si9648654pjb.22.2023.11.27.06.03.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:03:29 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) client-ip=23.128.96.37; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=mjl1gU6E; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by snail.vger.email (Postfix) with ESMTP id 95D88809D3CB; Mon, 27 Nov 2023 06:03:23 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at snail.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233489AbjK0ODO (ORCPT + 99 others); Mon, 27 Nov 2023 09:03:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233524AbjK0OBL (ORCPT ); Mon, 27 Nov 2023 09:01:11 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0ECAE2D40 for ; Mon, 27 Nov 2023 05:57:47 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A4BF8C433C7; Mon, 27 Nov 2023 13:57:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093466; bh=laXiPq6pCZezVsgqy2AbYoST02LKOiIN00hspmTWaV4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mjl1gU6E/H4SivudDYeoneFcQjDJLVwxkwxIB9sH92a8mwKwj4fGQiTQNOHHDC3tu J0P+AoSuDghoMThhLLoWMMxQw5jaEWhi5zj8N9bzLZy2IFSwzFhzFZqhU+BWNR2zfi mCQme+LXphmTC9L6W+akPXsWajpZfajpiqNsSPHl/W2PhMPyI0i2nhsjV/Qcvp4S/v qsNYBw3iHERLhrXwbdiLtqXY+/dZqDGRnQPxA4L+LMVNYcDaq23KhbVNja/lpr7SWK 9kXStCyPUn1fmeQMOG6iRXO62PL3tWoN9iOJEY5WkHAMp81fPUZdA+DJXBUDCtfc0A kpQ3oqJhOwciA== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 23/33] arm64: ftrace: Enable HAVE_FUNCTION_GRAPH_FREGS Date: Mon, 27 Nov 2023 22:57:40 +0900 Message-Id: <170109345963.343914.8985762897942714493.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:03:23 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726142068459124 X-GMAIL-MSGID: 1783726142068459124 From: Masami Hiramatsu (Google) Enable 
CONFIG_HAVE_FUNCTION_GRAPH_FREGS on arm64. Note that this depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS which is enabled if the compiler supports "-fpatchable-function-entry=2". If not, it continue to use ftrace_ret_regs. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Newly added. --- arch/arm64/Kconfig | 2 ++ arch/arm64/include/asm/ftrace.h | 6 ++++++ arch/arm64/kernel/entry-ftrace.S | 28 ++++++++++++++++++++++++++++ 3 files changed, 36 insertions(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 78f20e632712..373f6e113141 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -192,6 +192,8 @@ config ARM64 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS \ if $(cc-option,-fpatchable-function-entry=2) + select HAVE_FUNCTION_GRAPH_FREGS \ + if HAVE_DYNAMIC_FTRACE_WITH_ARGS select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS \ if DYNAMIC_FTRACE_WITH_ARGS && DYNAMIC_FTRACE_WITH_CALL_OPS select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \ diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index ab158196480c..efd5dbf74dd6 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -131,6 +131,12 @@ ftrace_regs_set_return_value(struct ftrace_regs *fregs, fregs->regs[0] = ret; } +static __always_inline unsigned long +ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs) +{ + return fregs->fp; +} + static __always_inline void ftrace_override_function_with_return(struct ftrace_regs *fregs) { diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index f0c16640ef21..d87ccdb9e678 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -328,6 +328,33 @@ SYM_FUNC_END(ftrace_stub_graph) * Run ftrace_return_to_handler() before going back to parent. * @fp is checked against the value passed by ftrace_graph_caller(). 
*/ +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +SYM_CODE_START(return_to_handler) + /* save ftrace_regs except for PC */ + sub sp, sp, #FREGS_SIZE + stp x0, x1, [sp, #FREGS_X0] + stp x2, x3, [sp, #FREGS_X2] + stp x4, x5, [sp, #FREGS_X4] + stp x6, x7, [sp, #FREGS_X6] + str x8, [sp, #FREGS_X8] + str x29, [sp, #FREGS_FP] + str x9, [sp, #FREGS_LR] + str x10, [sp, #FREGS_SP] + + mov x0, sp + bl ftrace_return_to_handler // addr = ftrace_return_to_hander(fregs); + mov x30, x0 // restore the original return address + + /* restore return value regs */ + ldp x0, x1, [sp, #FREGS_X0] + ldp x2, x3, [sp, #FREGS_X2] + ldp x4, x5, [sp, #FREGS_X4] + ldp x6, x7, [sp, #FREGS_X6] + add sp, sp, #FREGS_SIZE + + ret +SYM_CODE_END(return_to_handler) +#else /* !CONFIG_HAVE_FUNCTION_GRAPH_FREGS */ SYM_CODE_START(return_to_handler) /* save return value regs */ sub sp, sp, #FGRET_REGS_SIZE @@ -350,4 +377,5 @@ SYM_CODE_START(return_to_handler) ret SYM_CODE_END(return_to_handler) +#endif /* CONFIG_HAVE_FUNCTION_GRAPH_FREGS */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ From patchwork Mon Nov 27 13:57:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170163 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3135991vqx; Mon, 27 Nov 2023 06:03:52 -0800 (PST) X-Google-Smtp-Source: AGHT+IF3VSo45WjR9jogtfZWHTSRqJo9i9G5TNtYxm/K5fT0Yru5jyC9grK+RrUI5+hexwq2GWhC X-Received: by 2002:a05:6a00:8d3:b0:68e:41e9:10be with SMTP id s19-20020a056a0008d300b0068e41e910bemr12263175pfu.20.1701093832631; Mon, 27 Nov 2023 06:03:52 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093832; cv=none; d=google.com; s=arc-20160816; b=i72rxi5wH7DdEFKEwT/ioVYHSAxqrrEH4RaN8qn0qp7Llwm0IWAmHS9c+48Lj6uad+ XzPj1uoEXMdrxjtESIQn6UizJGxwN2VoBrzbqToZR5AhiKYHxGMFFiSwejxtPc20/hvy h4HywGDyMDKThfSZQPQkPjcn7oE+V67BsSiYI1gGUI+PWmF120qByrWcz4ouOc7EJPhC WDRZg2j442RHIok9mBt1bK15RZwBzbrWjW6sumdv5UYGFXTJydTU3EYrN0lRac6y0qIG bAWRynbdXLD9Tzqn0LsslZPI8cQ1vORkTkfyssArjNTRYMlXp9wDVFKVXvy3g8iZIbmU OPkA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=+aT4nYCdwDxeICjZ/nlUFA4py1+K5KVW7e9Qgmix9rQ=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=Av3FU1L4ETduYVP0naek0vbvJAY2eX23Vpm4+2Z1U5Py/rQ7hLTB+Z64ze2tpadZcy JQUuw2MgxNmG9F5slJSM5oJJSl6Moh0if2HuriSkoZrTIbDt4EpzJ5tAroWfEW7ROXkN daTKt8shQ1O4uA9QT51CKXdfbGXgDO5Gc+j36WgM/2yq9Ga6uxCePWndEF5coVteSSAg /PxCteNpYuw1P0AOrjBOtyFDEJ19BCikoh74ugUkaRvh9esMyVcAH2kqEcqan/ka7cU2 OjYtI6gFa/HCc348Btc+o48uJQ5jV7K1XnaJ+AWQP8FFK9RoG9lGB52zLFBpzg8jcqgu tdTg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=DTy2Z9dy; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from lipwig.vger.email (lipwig.vger.email. 
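
Together with the x86 patch earlier in the series, this illustrates the pattern an architecture follows to switch the return trampoline from fgraph_ret_regs to ftrace_regs. A sketch of the contract, with the inline accessor below mirroring the macro the x86 patch adds (treat it as an illustration, not new code):

	/*
	 * To enable HAVE_FUNCTION_GRAPH_FREGS an architecture:
	 *  - selects HAVE_FUNCTION_GRAPH_FREGS in its Kconfig (this series
	 *    gates it on HAVE_DYNAMIC_FTRACE_WITH_ARGS);
	 *  - provides ftrace_regs_get_frame_pointer() in <asm/ftrace.h>;
	 *  - makes return_to_handler() build a (partial) struct ftrace_regs on
	 *    the stack, call ftrace_return_to_handler(fregs), jump to the
	 *    address it returns, and restore only the return-value registers.
	 */
	static __always_inline unsigned long
	ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs)
	{
		return frame_pointer(&fregs->regs);	/* as on x86 */
	}
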
From patchwork Mon Nov 27 13:57:52 2023 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170163 Subject: [PATCH v3 24/33] fprobe: Use ftrace_regs in fprobe entry handler Date: Mon, 27 Nov 2023 22:57:52 +0900 Message-Id: <170109347190.343914.13818837385144085179.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Masami Hiramatsu (Google) This allows fprobes to
be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, then we can enable fprobe on arm64. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes from previous series: NOTHING, just forward ported. --- include/linux/fprobe.h | 2 +- kernel/trace/Kconfig | 3 ++- kernel/trace/bpf_trace.c | 10 +++++++--- kernel/trace/fprobe.c | 4 ++-- kernel/trace/trace_fprobe.c | 6 +++++- lib/test_fprobe.c | 4 ++-- samples/fprobe/fprobe_example.c | 2 +- 7 files changed, 20 insertions(+), 11 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 3e03758151f4..36c0595f7b93 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -35,7 +35,7 @@ struct fprobe { int nr_maxactive; int (*entry_handler)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 308b3bec01b1..805d72ab77c6 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -290,7 +290,7 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS + depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n @@ -675,6 +675,7 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS + depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 868008f56fec..b0b98f0a0e52 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2501,7 +2501,7 @@ static int __init bpf_event_init(void) fs_initcall(bpf_event_init); #endif /* CONFIG_MODULES */ -#ifdef CONFIG_FPROBE +#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2729,10 +2729,14 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, static int kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { struct bpf_kprobe_multi_link *link; + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return 0; link = container_of(fp, struct bpf_kprobe_multi_link, fp); kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); @@ -3004,7 +3008,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE */ +#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 881f90f0cbcf..df323cb7bed1 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -46,7 +46,7 @@ static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, } if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, ftrace_get_regs(fregs), entry_data); + ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); /* If entry_handler returns !0, nmissed is not counted. 
*/ if (rh) { @@ -182,7 +182,7 @@ static void fprobe_init(struct fprobe *fp) fp->ops.func = fprobe_kprobe_handler; else fp->ops.func = fprobe_handler; - fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS; + fp->ops.flags |= FTRACE_OPS_FL_SAVE_ARGS; } static int fprobe_init_rethook(struct fprobe *fp, int num) diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 8bfe23af9c73..71bf38d698f1 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -320,12 +320,16 @@ NOKPROBE_SYMBOL(fexit_perf_func); #endif /* CONFIG_PERF_EVENTS */ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; + if (!regs) + return 0; + if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fentry_trace_func(tf, entry_ip, regs); #ifdef CONFIG_PERF_EVENTS diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 24de0e5ff859..ff607babba18 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -40,7 +40,7 @@ static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); /* This can be called on the fprobe_selftest_target and the fprobe_selftest_target2 */ @@ -81,7 +81,7 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); return 0; diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index 64e715e7ed11..1545a1aac616 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -50,7 +50,7 @@ static void show_backtrace(void) static int sample_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { if (use_trace) /* From patchwork Mon Nov 27 13:58:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170165 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3136123vqx; Mon, 27 Nov 2023 06:03:58 -0800 (PST) X-Google-Smtp-Source: AGHT+IHEtQcHQ0qrVaalXL1WYWwu5ygd54wHhnYWsp1kA0+qPQzdqzD9Rmb17lUZ0aL0UVqYP+vv X-Received: by 2002:a05:6820:80a:b0:587:992d:48b with SMTP id bg10-20020a056820080a00b00587992d048bmr12423279oob.1.1701093838625; Mon, 27 Nov 2023 06:03:58 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093838; cv=none; d=google.com; s=arc-20160816; b=MbrUaqyb1fE9ZGZz9fYOtv93dWh6zWzpkjpEZih+D1O531/dcfcfO4thGhjSjEax3p QkYdmjbO8bBl1UQWTPXzuQaMXGmhlg0Jnn57Sh+QkHwefCjkF3MYbGvfwtfl9vKLZ5R/ MGcOzNCm8W3jJTdNGlAmAAdMywiGCe6zHwI6s5QJaX0VQWPzeWaVtdVCqDjvAT10CTWD XtMpwZYXkIN0Q9Nfsbx/vSWIYc9bEA2eFUVKTmJJFpar4+MUCPN4za580livbHRWwCaU ZwVrJpXGgEFVQpVl9mcLP2upP7PWFe+huhtTNoYfFlkMheDPqHHbnqM075+jDHj/yyAO iVng== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
Subject: [PATCH v3 25/33] fprobe: Use ftrace_regs in fprobe exit handler Date: Mon, 27 Nov 2023 22:58:03 +0900 Message-Id: <170109348358.343914.14064032478196912219.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
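The previous message moved the fprobe entry_handler over to ftrace_regs, and the message below does the same for exit_handler. Under both changes a minimal fprobe user would look roughly like this sketch (illustrative only; the demo_* names are invented and error handling is omitted):

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

static int demo_entry(struct fprobe *fp, unsigned long ip, unsigned long ret_ip,
		      struct ftrace_regs *fregs, void *data)
{
	/* First argument of the traced function, via the arch-neutral helper. */
	pr_info("enter %ps arg0=0x%lx\n", (void *)ip,
		ftrace_regs_get_argument(fregs, 0));
	return 0;	/* returning 0 keeps the exit handler armed */
}

static void demo_exit(struct fprobe *fp, unsigned long ip, unsigned long ret_ip,
		      struct ftrace_regs *fregs, void *data)
{
	pr_info("exit %ps ret=0x%lx\n", (void *)ip,
		ftrace_regs_get_return_value(fregs));
}

static struct fprobe demo_fprobe = {
	.entry_handler = demo_entry,
	.exit_handler = demo_exit,
};

/* register_fprobe(&demo_fprobe, "vfs_read", NULL) would arm the probe. */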
From: Masami Hiramatsu (Google) Change the fprobe exit handler to use the ftrace_regs structure instead of pt_regs. This also introduces HAVE_PT_REGS_TO_FTRACE_REGS_CAST, which means the memory layout of ftrace_regs is identical to pt_regs, so the two can be cast to each other. Fprobe gains a new dependency on this. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Use ftrace_regs_get_return_value() Changes from previous series: NOTHING, just forward ported. --- arch/loongarch/Kconfig | 1 + arch/s390/Kconfig | 1 + arch/x86/Kconfig | 1 + include/linux/fprobe.h | 2 +- include/linux/ftrace.h | 5 +++++ kernel/trace/Kconfig | 8 ++++++++ kernel/trace/bpf_trace.c | 6 +++++- kernel/trace/fprobe.c | 3 ++- kernel/trace/trace_fprobe.c | 6 +++++- lib/test_fprobe.c | 6 +++--- samples/fprobe/fprobe_example.c | 2 +- 11 files changed, 33 insertions(+), 8 deletions(-) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index e14396a2ddcb..258e9bee1503 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -108,6 +108,7 @@ config LOONGARCH select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index ae29e4392664..5aedb4320e7c 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -167,6 +167,7 @@ config S390 select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT if HAVE_MARCH_Z196_FEATURES diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 4b4c2f9d67da..e11ba9a33afe 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -209,6 +209,7 @@ config X86 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64 + select HAVE_PT_REGS_TO_FTRACE_REGS_CAST if X86_64 select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_SAMPLE_FTRACE_DIRECT if X86_64 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64 diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 36c0595f7b93..879a30956009 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -38,7 +38,7 @@ struct fprobe { unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); }; diff
--git a/include/linux/ftrace.h b/include/linux/ftrace.h index da2a23f5a9ed..a72a2eaec576 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -159,6 +159,11 @@ struct ftrace_regs { #define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0) #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ +#ifdef CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST + +static_assert(sizeof(struct pt_regs) == sizeof(struct ftrace_regs)); + +#endif /* CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs) { diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 805d72ab77c6..1a2544712690 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -60,6 +60,13 @@ config HAVE_DYNAMIC_FTRACE_WITH_ARGS This allows for use of ftrace_regs_get_argument() and ftrace_regs_get_stack_pointer(). +config HAVE_PT_REGS_TO_FTRACE_REGS_CAST + bool + help + If this is set, the memory layout of the ftrace_regs data structure + is the same as the pt_regs. So the pt_regs is possible to be casted + to ftrace_regs. + config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE bool help @@ -291,6 +298,7 @@ config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS + depends on HAVE_PT_REGS_TO_FTRACE_REGS_CAST || !HAVE_DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index b0b98f0a0e52..a4b5e34b0419 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2745,10 +2745,14 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, static void kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { struct bpf_kprobe_multi_link *link; + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index df323cb7bed1..38fe6a19450b 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -124,6 +124,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, { struct fprobe *fp = (struct fprobe *)data; struct fprobe_rethook_node *fpr; + struct ftrace_regs *fregs = (struct ftrace_regs *)regs; int bit; if (!fp || fprobe_disabled(fp)) @@ -141,7 +142,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, return; } - fp->exit_handler(fp, fpr->entry_ip, ret_ip, regs, + fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, fp->entry_data_size ? 
(void *)fpr->data : NULL); ftrace_test_recursion_unlock(bit); } diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 71bf38d698f1..c60d0d9f1a95 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -341,10 +341,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, NOKPROBE_SYMBOL(fentry_dispatcher); static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fexit_trace_func(tf, entry_ip, ret_ip, regs); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index ff607babba18..271ce0caeec0 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -59,9 +59,9 @@ static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { - unsigned long ret = regs_return_value(regs); + unsigned long ret = ftrace_regs_get_return_value(fregs); KUNIT_EXPECT_FALSE(current_test, preemptible()); if (ip != target_ip) { @@ -89,7 +89,7 @@ static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index 1545a1aac616..d476d1f07538 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -67,7 +67,7 @@ static int sample_entry_handler(struct fprobe *fp, unsigned long ip, } static void sample_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *data) { unsigned long rip = ret_ip; From patchwork Mon Nov 27 13:58:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170164 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3136078vqx; Mon, 27 Nov 2023 06:03:57 -0800 (PST) X-Google-Smtp-Source: AGHT+IFXCDi6Obg74XwErKh4NwB/n86B+yQjAghHl3Nc6Ltq1Khvr+nUgfRf2fKo/3DZFU/2sUZ8 X-Received: by 2002:a25:870b:0:b0:da0:c887:36cb with SMTP id a11-20020a25870b000000b00da0c88736cbmr9879996ybl.45.1701093836999; Mon, 27 Nov 2023 06:03:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093836; cv=none; d=google.com; s=arc-20160816; b=kkhK1BhZNLO6SlpMxQy6vycu/rFVNIZfeSafsNOLO/IW8++FpaHKoieSt0lnPFl2om hbuVfUHA41rT0F1mNOUFk3ifLOfoputzzKTCdKI+trNI0T4cB64r7KkrX+DtxsJ9ky5C UvUx281KUTRudih8Ts+3bYOvvmst/zOknoCsmh0vywsSbiuABFAMez3LWS2kRgl1QFKj WHZ9zhbOjLEH51RTrOVbmACNUAfIQjx6BpwU10mjwYLZ1MMGeDXgoX0fGY/kei5cU2An wpxYkiGdx78NjPZ2YdvVKf/OpzcA/DZAbQZHTTRtTguJwSeosNxb6IsaCm3HvAcz6STz NPTg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; 
Subject: [PATCH v3 26/33] tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs Date: Mon, 27 Nov 2023 22:58:16 +0900 Message-Id: <170109349587.343914.12760806372446369018.stgit@devnote2> X-Mailer:
git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:03:50 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726171079613963 X-GMAIL-MSGID: 1783726171079613963 From: Masami Hiramatsu (Google) Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs. If the architecture defines its own ftrace_regs, this copies partial registers to pt_regs and returns it. If not, ftrace_regs is the same as pt_regs and ftrace_partial_regs() will return ftrace_regs::regs. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes from previous series: NOTHING, just forward ported. --- arch/arm64/include/asm/ftrace.h | 11 +++++++++++ include/linux/ftrace.h | 17 +++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index efd5dbf74dd6..31051fa2b4d9 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -143,6 +143,17 @@ ftrace_override_function_with_return(struct ftrace_regs *fregs) fregs->pc = fregs->lr; } +static __always_inline struct pt_regs * +ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) +{ + memcpy(regs->regs, fregs->regs, sizeof(u64) * 9); + regs->sp = fregs->sp; + regs->pc = fregs->pc; + regs->regs[29] = fregs->fp; + regs->regs[30] = fregs->lr; + return regs; +} + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index a72a2eaec576..515ec804d605 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -173,6 +173,23 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs return arch_ftrace_get_regs(fregs); } +#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \ + defined(CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST) + +static __always_inline struct pt_regs * +ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + /* + * If CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST=y, ftrace_regs memory + * layout is the same as pt_regs. So always returns that address. + * Since arch_ftrace_get_regs() will check some members and may return + * NULL, we can not use it. + */ + return &fregs->regs; +} + +#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs. 
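Worth spelling out is the calling pattern that ftrace_partial_regs() expects: the caller supplies scratch pt_regs storage and must treat the result as partial, because only the fields the arch helper copies (x0-x8, sp, pc, fp and lr in the arm64 version above) are meaningful. A hedged sketch, not taken from any later patch in the series:

#include <linux/ftrace.h>
#include <linux/ptrace.h>
#include <linux/printk.h>

static void demo_partial_regs(struct ftrace_regs *fregs)
{
	struct pt_regs copy;	/* scratch storage owned by the caller */
	struct pt_regs *regs;

	/*
	 * Returns either &copy filled from fregs, or (when ftrace_regs is
	 * layout-compatible with pt_regs) a pointer into fregs itself.
	 */
	regs = ftrace_partial_regs(fregs, &copy);

	pr_info("partial regs: ip=0x%lx\n", instruction_pointer(regs));
}

In real code the scratch pt_regs would more likely live in per-CPU storage than on the stack, since struct pt_regs is a few hundred bytes on most 64-bit architectures.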
From patchwork Mon Nov 27 13:58:28 2023 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170168 Subject: [PATCH v3 27/33] tracing: Add ftrace_fill_perf_regs() for perf event Date: Mon, 27 Nov 2023 22:58:28 +0900 Message-Id: <170109350805.343914.11303748656507387778.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Masami Hiramatsu (Google) Add ftrace_fill_perf_regs()
which should be compatible with the perf_fetch_caller_regs(). In other words, the pt_regs returned from the ftrace_fill_perf_regs() must satisfy 'user_mode(regs) == false' and can be used for stack tracing. Signed-off-by: Masami Hiramatsu (Google) --- Changes from previous series: NOTHING, just forward ported. --- arch/arm64/include/asm/ftrace.h | 7 +++++++ arch/powerpc/include/asm/ftrace.h | 7 +++++++ arch/s390/include/asm/ftrace.h | 5 +++++ arch/x86/include/asm/ftrace.h | 7 +++++++ include/linux/ftrace.h | 31 +++++++++++++++++++++++++++++++ 5 files changed, 57 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 31051fa2b4d9..c1921bdf760b 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -154,6 +154,13 @@ ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) return regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->pc = (fregs)->pc; \ + (_regs)->regs[29] = (fregs)->fp; \ + (_regs)->sp = (fregs)->sp; \ + (_regs)->pstate = PSR_MODE_EL1h; \ + } while (0) + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index 7e138e0e3baf..8737a794c764 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -52,6 +52,13 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return fregs->regs.msr ? &fregs->regs : NULL; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->result = 0; \ + (_regs)->nip = (fregs)->regs.nip; \ + (_regs)->gpr[1] = (fregs)->regs.gpr[1]; \ + asm volatile("mfmsr %0" : "=r" ((_regs)->msr)); \ + } while (0) + static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 01e775c98425..c2a269c1617c 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -97,6 +97,11 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, #define ftrace_regs_query_register_offset(name) \ regs_query_register_offset(name) +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->psw.addr = (fregs)->regs.psw.addr; \ + (_regs)->gprs[15] = (fregs)->regs.gprs[15]; \ + } while (0) + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS /* * When an ftrace registered caller is tracing a function that is diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index a061f8832b20..2e3de45e9746 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -54,6 +54,13 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) return &fregs->regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->ip = (fregs)->regs.ip; \ + (_regs)->sp = (fregs)->regs.sp; \ + (_regs)->cs = __KERNEL_CS; \ + (_regs)->flags = 0; \ + } while (0) + #define ftrace_regs_set_instruction_pointer(fregs, _ip) \ do { (fregs)->regs.ip = (_ip); } while (0) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 515ec804d605..8150edcf8496 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -190,6 +190,37 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_PT_REGS_TO_FTRACE_REGS_CAST */ +#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS + +/* + * Please define arch 
dependent pt_regs which is compatible with the + * perf_arch_fetch_caller_regs() but based on ftrace_regs. + * This requires + * - user_mode(_regs) returns false (always kernel mode). + * - able to use the _regs for stack trace. + */ +#ifndef arch_ftrace_fill_perf_regs +/* Same as perf_arch_fetch_caller_regs(): do nothing by default */ +#define arch_ftrace_fill_perf_regs(fregs, _regs) do {} while (0) +#endif + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + arch_ftrace_fill_perf_regs(fregs, regs); + return regs; +} + +#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + return &fregs->regs; +} + +#endif + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs.
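The contract stated in the changelog above (the filled pt_regs must satisfy user_mode(regs) == false and be usable for stack tracing) is easiest to see in a short sketch; this is illustrative only, not code from the series, and the surrounding perf plumbing is omitted:

#include <linux/ftrace.h>
#include <linux/ptrace.h>
#include <linux/bug.h>

static void demo_fill_perf_regs(struct ftrace_regs *fregs)
{
	struct pt_regs regs;
	struct pt_regs *perf_regs;

	/*
	 * Fills only what perf needs for callchain/stack sampling (ip, sp,
	 * fp and a kernel-mode flags/cs/pstate value), mirroring what
	 * perf_arch_fetch_caller_regs() does for a live call site. Only
	 * those fields may be consulted afterwards.
	 */
	perf_regs = ftrace_fill_perf_regs(fregs, &regs);

	/* Per the contract, this must always look like kernel mode. */
	WARN_ON_ONCE(user_mode(perf_regs));
}

The arm64, powerpc, s390 and x86 arch_ftrace_fill_perf_regs() definitions above are what give this generic helper its meaning on each architecture.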
From patchwork Mon Nov 27 13:58:40 2023 X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170166 Subject: [PATCH v3 28/33] fprobe: Rewrite fprobe on function-graph tracer Date: Mon, 27 Nov 2023 22:58:40 +0900 Message-Id: <170109352014.343914.17580314660854847955.stgit@devnote2> In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> From: Masami Hiramatsu (Google) Rewrite fprobe
implementation on top of the function-graph tracer. Major API changes are: - The 'nr_maxactive' field is deprecated. - This depends on CONFIG_DYNAMIC_FTRACE_WITH_ARGS or !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and CONFIG_HAVE_FUNCTION_GRAPH_FREGS. So it currently works only on x86_64. - Currently the entry data size is limited to 15 * sizeof(long). - If too many fprobe exit handlers are set on the same function, it will fail to probe. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v3: - Update for new reserve_data/retrieve_data API. - Fix internal push/pop on fgraph data logic so that it can correctly save/restore the returning fprobes. Changes in v2: - Add more lockdep_assert_held(fprobe_mutex) - Use READ_ONCE() and WRITE_ONCE() for fprobe_hlist_node::fp. - Add NOKPROBE_SYMBOL() for the functions which are called from the entry/exit callbacks. --- include/linux/fprobe.h | 54 +++- kernel/trace/Kconfig | 8 - kernel/trace/fprobe.c | 638 ++++++++++++++++++++++++++++++++++-------------- lib/test_fprobe.c | 45 --- 4 files changed, 493 insertions(+), 252 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 879a30956009..08b37b0d1d05 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -5,32 +5,56 @@ #include #include -#include +#include +#include +#include + +struct fprobe; + +/** + * struct fprobe_hlist_node - address based hash list node for fprobe. + * + * @hlist: The hlist node for address search hash table. + * @addr: The address represented by this. + * @fp: The fprobe which owns this. + */ +struct fprobe_hlist_node { + struct hlist_node hlist; + unsigned long addr; + struct fprobe *fp; +}; + +/** + * struct fprobe_hlist - hash list nodes for fprobe. + * + * @hlist: The hlist node for existence checking hash table. + * @rcu: rcu_head for RCU deferred release. + * @fp: The fprobe which owns this fprobe_hlist. + * @size: The size of @array. + * @array: The fprobe_hlist_node for each address to probe. + */ +struct fprobe_hlist { + struct hlist_node hlist; + struct rcu_head rcu; + struct fprobe *fp; + int size; + struct fprobe_hlist_node array[]; +}; /** * struct fprobe - ftrace based probe. - * @ops: The ftrace_ops. + * * @nmissed: The counter for missing events. * @flags: The status flag. - * @rethook: The rethook data structure. (internal data) * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. + * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. + * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. */ struct fprobe { -#ifdef CONFIG_FUNCTION_TRACER - /* - * If CONFIG_FUNCTION_TRACER is not set, CONFIG_FPROBE is disabled too. - * But user of fprobe may keep embedding the struct fprobe on their own - * code. To avoid build error, this will keep the fprobe data structure - * defined here, but remove ftrace_ops data structure. - */ - struct ftrace_ops ops; -#endif unsigned long nmissed; unsigned int flags; - struct rethook *rethook; size_t entry_data_size; int nr_maxactive; @@ -40,6 +64,8 @@ struct fprobe { void (*exit_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); + + struct fprobe_hlist *hlist_array; }; /* This fprobe is soft-disabled.
*/ diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 1a2544712690..11a96275b68c 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -296,11 +296,9 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" - depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_PT_REGS_TO_FTRACE_REGS_CAST || !HAVE_DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_RETHOOK - select RETHOOK + depends on FUNCTION_GRAPH_TRACER + depends on HAVE_FUNCTION_GRAPH_FREGS + depends on DYNAMIC_FTRACE_WITH_ARGS || !HAVE_DYNAMIC_FTRACE_WITH_ARGS default n help This option enables kernel function probe (fprobe) based on ftrace. diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 38fe6a19450b..53e681c2458b 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -8,98 +8,193 @@ #include #include #include -#include +#include +#include #include #include #include "trace.h" -struct fprobe_rethook_node { - struct rethook_node node; - unsigned long entry_ip; - unsigned long entry_parent_ip; - char data[]; -}; +#define FPROBE_IP_HASH_BITS 8 +#define FPROBE_IP_TABLE_SIZE (1 << FPROBE_IP_HASH_BITS) -static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) -{ - struct fprobe_rethook_node *fpr; - struct rethook_node *rh = NULL; - struct fprobe *fp; - void *entry_data = NULL; - int ret = 0; +#define FPROBE_HASH_BITS 6 +#define FPROBE_TABLE_SIZE (1 << FPROBE_HASH_BITS) - fp = container_of(ops, struct fprobe, ops); +/* + * fprobe_table: hold 'fprobe_hlist::hlist' for checking the fprobe still + * exists. The key is the address of fprobe instance. + * fprobe_ip_table: hold 'fprobe_hlist::array[*]' for searching the fprobe + * instance related to the funciton address. The key is the ftrace IP + * address. + * + * When unregistering the fprobe, fprobe_hlist::fp and fprobe_hlist::array[*].fp + * are set NULL and delete those from both hash tables (by hlist_del_rcu). + * After an RCU grace period, the fprobe_hlist itself will be released. + * + * fprobe_table and fprobe_ip_table can be accessed from either + * - Normal hlist traversal and RCU add/del under 'fprobe_mutex' is held. + * - RCU hlist traversal under disabling preempt + */ +static struct hlist_head fprobe_table[FPROBE_TABLE_SIZE]; +static struct hlist_head fprobe_ip_table[FPROBE_IP_TABLE_SIZE]; +static DEFINE_MUTEX(fprobe_mutex); - if (fp->exit_handler) { - rh = rethook_try_get(fp->rethook); - if (!rh) { - fp->nmissed++; - return; - } - fpr = container_of(rh, struct fprobe_rethook_node, node); - fpr->entry_ip = ip; - fpr->entry_parent_ip = parent_ip; - if (fp->entry_data_size) - entry_data = fpr->data; +/* + * Find first fprobe in the hlist. It will be iterated twice in the entry + * probe, once for correcting the total required size, the second time is + * calling back the user handlers. + * Thus the hlist in the fprobe_table must be sorted and new probe needs to + * be added *before* the first fprobe. 
+ */ +static struct fprobe_hlist_node *find_first_fprobe_node(unsigned long ip) +{ + struct fprobe_hlist_node *node; + struct hlist_head *head; + + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_for_each_entry_rcu(node, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (node->addr == ip) + return node; } + return NULL; +} +NOKPROBE_SYMBOL(find_first_fprobe_node); - if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); +/* Node insertion and deletion requires the fprobe_mutex */ +static void insert_fprobe_node(struct fprobe_hlist_node *node) +{ + unsigned long ip = node->addr; + struct fprobe_hlist_node *next; + struct hlist_head *head; - /* If entry_handler returns !0, nmissed is not counted. */ - if (rh) { - if (ret) - rethook_recycle(rh); - else - rethook_hook(rh, ftrace_get_regs(fregs), true); + lockdep_assert_held(&fprobe_mutex); + + next = find_first_fprobe_node(ip); + if (next) { + hlist_add_before_rcu(&node->hlist, &next->hlist); + return; } + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_add_head_rcu(&node->hlist, head); } -static void fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +/* Return true if there are synonims */ +static bool delete_fprobe_node(struct fprobe_hlist_node *node) { - struct fprobe *fp; - int bit; + lockdep_assert_held(&fprobe_mutex); - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; + WRITE_ONCE(node->fp, NULL); + hlist_del_rcu(&node->hlist); + return !!find_first_fprobe_node(node->addr); +} - /* recursion detection has to go before any traceable function and - * all functions before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; +/* Check existence of the fprobe */ +static bool is_fprobe_still_exist(struct fprobe *fp) +{ + struct hlist_head *head; + struct fprobe_hlist *fph; + + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_for_each_entry_rcu(fph, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (fph->fp == fp) + return true; } - __fprobe_handler(ip, parent_ip, ops, fregs); - ftrace_test_recursion_unlock(bit); + return false; +} +NOKPROBE_SYMBOL(is_fprobe_still_exist); + +static int add_fprobe_hash(struct fprobe *fp) +{ + struct fprobe_hlist *fph = fp->hlist_array; + struct hlist_head *head; + + lockdep_assert_held(&fprobe_mutex); + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (is_fprobe_still_exist(fp)) + return -EEXIST; + + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_add_head_rcu(&fp->hlist_array->hlist, head); + return 0; } -NOKPROBE_SYMBOL(fprobe_handler); -static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +static int del_fprobe_hash(struct fprobe *fp) { - struct fprobe *fp; - int bit; + struct fprobe_hlist *fph = fp->hlist_array; - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; + lockdep_assert_held(&fprobe_mutex); - /* recursion detection has to go before any traceable function and - * all functions called before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (!is_fprobe_still_exist(fp)) + return -ENOENT; + + fph->fp = NULL; + hlist_del_rcu(&fph->hlist); + return 0; +} + 
+/* The entry data size is 4 bits (=16) * sizeof(long) in maximum */ +#define FPROBE_HEADER_SIZE_BITS 4 +#define MAX_FPROBE_DATA_SIZE_WORD ((1L << FPROBE_HEADER_SIZE_BITS) - 1) +#define MAX_FPROBE_DATA_SIZE (MAX_FPROBE_DATA_SIZE_WORD * sizeof(long)) +#define FPROBE_HEADER_PTR_BITS (BITS_PER_LONG - FPROBE_HEADER_SIZE_BITS) +#define FPROBE_HEADER_PTR_MASK GENMASK(FPROBE_HEADER_PTR_BITS - 1, 0) +#define FPROBE_HEADER_SIZE sizeof(unsigned long) + +static inline unsigned long encode_fprobe_header(struct fprobe *fp, int size_words) +{ + if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD || + ((unsigned long)fp & ~FPROBE_HEADER_PTR_MASK) != + ~FPROBE_HEADER_PTR_MASK)) { + return 0; } + return ((unsigned long)size_words << FPROBE_HEADER_PTR_BITS) | + ((unsigned long)fp & FPROBE_HEADER_PTR_MASK); +} + +/* Return reserved data size in words */ +static inline int decode_fprobe_header(unsigned long val, struct fprobe **fp) +{ + unsigned long ptr; + + ptr = (val & FPROBE_HEADER_PTR_MASK) | ~FPROBE_HEADER_PTR_MASK; + if (fp) + *fp = (struct fprobe *)ptr; + return val >> FPROBE_HEADER_PTR_BITS; +} + +/* + * fprobe shadow stack management: + * Since fprobe shares a single fgraph_ops, it needs to share the stack entry + * among the probes on the same function exit. Note that a new probe can be + * registered before a target function is returning, we can not use the hash + * table to find the corresponding probes. Thus the probe address is stored on + * the shadow stack with its entry data size. + * + */ +static inline int __fprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + if (!fp->entry_handler) + return 0; + + return fp->entry_handler(fp, ip, parent_ip, fregs, data); +} +static inline int __fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + int ret; /* * This user handler is shared with other kprobes and is not expected to be * called recursively. 
So if any other kprobe handler is running, this will @@ -108,45 +203,180 @@ static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, */ if (unlikely(kprobe_running())) { fp->nmissed++; - goto recursion_unlock; + return 0; } kprobe_busy_begin(); - __fprobe_handler(ip, parent_ip, ops, fregs); + ret = __fprobe_handler(ip, parent_ip, fp, fregs, data); kprobe_busy_end(); - -recursion_unlock: - ftrace_test_recursion_unlock(bit); + return ret; } -static void fprobe_exit_handler(struct rethook_node *rh, void *data, - unsigned long ret_ip, struct pt_regs *regs) +static int fprobe_entry(unsigned long func, unsigned long ret_ip, + struct ftrace_regs *fregs, struct fgraph_ops *gops) { - struct fprobe *fp = (struct fprobe *)data; - struct fprobe_rethook_node *fpr; - struct ftrace_regs *fregs = (struct ftrace_regs *)regs; - int bit; + struct fprobe_hlist_node *node, *first; + unsigned long *fgraph_data = NULL; + unsigned long header; + int reserved_words; + struct fprobe *fp; + int used, ret; - if (!fp || fprobe_disabled(fp)) - return; + if (WARN_ON_ONCE(!fregs)) + return 0; - fpr = container_of(rh, struct fprobe_rethook_node, node); + first = node = find_first_fprobe_node(func); + if (unlikely(!first)) + return 0; + + reserved_words = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || !fp->exit_handler) + continue; + /* + * Since fprobe can be enabled until the next loop, we ignore the + * fprobe's disabled flag in this loop. + */ + reserved_words += + DIV_ROUND_UP(fp->entry_data_size, sizeof(long)) + 1; + } + node = first; + if (reserved_words) { + fgraph_data = fgraph_reserve_data(gops->idx, reserved_words * sizeof(long)); + if (unlikely(!fgraph_data)) { + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (fp && !fprobe_disabled(fp)) + fp->nmissed++; + } + return 0; + } + } /* - * we need to assure no calls to traceable functions in-between the - * end of fprobe_handler and the beginning of fprobe_exit_handler. + * TODO: recursion detection has been done in the fgraph. Thus we need + * to add a callback to increment missed counter. */ - bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip); - if (bit < 0) { - fp->nmissed++; + used = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + void *data; + + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || fprobe_disabled(fp)) + continue; + + if (fp->entry_data_size && fp->exit_handler) + data = fgraph_data + used + 1; + else + data = NULL; + + if (fprobe_shared_with_kprobes(fp)) + ret = __fprobe_kprobe_handler(func, ret_ip, fp, fregs, data); + else + ret = __fprobe_handler(func, ret_ip, fp, fregs, data); + /* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */ + if (!ret && fp->exit_handler) { + int size_words = DIV_ROUND_UP(fp->entry_data_size, sizeof(long)); + + header = encode_fprobe_header(fp, size_words); + if (likely(header)) { + fgraph_data[used] = header; + used += size_words + 1; + } + } + } + if (used < reserved_words) + memset(fgraph_data + used, 0, reserved_words - used); + + /* If any exit_handler is set, data must be used. 
*/ + return used != 0; +} +NOKPROBE_SYMBOL(fprobe_entry); + +static void fprobe_return(unsigned long func, unsigned long ret_ip, + struct ftrace_regs *fregs, struct fgraph_ops *gops) +{ + unsigned long *fgraph_data = NULL; + unsigned long val; + struct fprobe *fp; + int size, curr; + int size_words; + + fgraph_data = (unsigned long *)fgraph_retrieve_data(gops->idx, &size); + if (!fgraph_data) + return; + size_words = DIV_ROUND_UP(size, sizeof(long)); + + preempt_disable(); + + curr = 0; + while (size_words > curr) { + val = fgraph_data[curr++]; + if (!val) + break; + + size = decode_fprobe_header(val, &fp); + if (fp && is_fprobe_still_exist(fp) && !fprobe_disabled(fp)) { + if (WARN_ON_ONCE(curr + size > size_words)) + break; + fp->exit_handler(fp, func, ret_ip, fregs, + size ? fgraph_data + curr : NULL); + } + curr += size + 1; + } + preempt_enable(); +} +NOKPROBE_SYMBOL(fprobe_return); + +static struct fgraph_ops fprobe_graph_ops = { + .entryregfunc = fprobe_entry, + .retregfunc = fprobe_return, +}; +static int fprobe_graph_active; + +/* Add @addrs to the ftrace filter and register fgraph if needed. */ +static int fprobe_graph_add_ips(unsigned long *addrs, int num) +{ + int ret; + + lockdep_assert_held(&fprobe_mutex); + + ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0); + if (ret) + return ret; + + if (!fprobe_graph_active) { + ret = register_ftrace_graph(&fprobe_graph_ops); + if (WARN_ON_ONCE(ret)) { + ftrace_free_filter(&fprobe_graph_ops.ops); + return ret; + } + } + fprobe_graph_active++; + return 0; +} + +/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */ +static void fprobe_graph_remove_ips(unsigned long *addrs, int num) +{ + lockdep_assert_held(&fprobe_mutex); + + fprobe_graph_active--; + if (!fprobe_graph_active) { + /* Q: should we unregister it ? */ + unregister_ftrace_graph(&fprobe_graph_ops); return; } - fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, - fp->entry_data_size ? (void *)fpr->data : NULL); - ftrace_test_recursion_unlock(bit); + ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0); } -NOKPROBE_SYMBOL(fprobe_exit_handler); static int symbols_cmp(const void *a, const void *b) { @@ -176,62 +406,96 @@ static unsigned long *get_ftrace_locations(const char **syms, int num) return ERR_PTR(-ENOENT); } -static void fprobe_init(struct fprobe *fp) +struct filter_match_data { + const char *filter; + const char *notfilter; + size_t index; + size_t size; + unsigned long *addrs; +}; + +static int filter_match_callback(void *data, const char *name, unsigned long addr) { - fp->nmissed = 0; - if (fprobe_shared_with_kprobes(fp)) - fp->ops.func = fprobe_kprobe_handler; - else - fp->ops.func = fprobe_handler; - fp->ops.flags |= FTRACE_OPS_FL_SAVE_ARGS; + struct filter_match_data *match = data; + + if (!glob_match(match->filter, name) || + (match->notfilter && glob_match(match->notfilter, name))) + return 0; + + if (!ftrace_location(addr)) + return 0; + + if (match->addrs) + match->addrs[match->index] = addr; + + match->index++; + return match->index == match->size; } -static int fprobe_init_rethook(struct fprobe *fp, int num) +/* + * Make IP list from the filter/no-filter glob patterns. + * Return the number of matched symbols, or -ENOENT. 
+ */ +static int ip_list_from_filter(const char *filter, const char *notfilter, + unsigned long *addrs, size_t size) { - int i, size; + struct filter_match_data match = { .filter = filter, .notfilter = notfilter, + .index = 0, .size = size, .addrs = addrs}; + int ret; - if (num <= 0) - return -EINVAL; + ret = kallsyms_on_each_symbol(filter_match_callback, &match); + if (ret < 0) + return ret; + ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match); + if (ret < 0) + return ret; - if (!fp->exit_handler) { - fp->rethook = NULL; - return 0; - } + return match.index ?: -ENOENT; +} - /* Initialize rethook if needed */ - if (fp->nr_maxactive) - size = fp->nr_maxactive; - else - size = num * num_possible_cpus() * 2; - if (size <= 0) +static void fprobe_fail_cleanup(struct fprobe *fp) +{ + kfree(fp->hlist_array); + fp->hlist_array = NULL; +} + +/* Initialize the fprobe data structure. */ +static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num) +{ + struct fprobe_hlist *hlist_array; + unsigned long addr; + int size, i; + + if (!fp || !addrs || num <= 0) return -EINVAL; - fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler); - if (!fp->rethook) + size = ALIGN(fp->entry_data_size, sizeof(long)); + if (size > MAX_FPROBE_DATA_SIZE) + return -E2BIG; + fp->entry_data_size = size; + + hlist_array = kzalloc(struct_size(hlist_array, array, num), GFP_KERNEL); + if (!hlist_array) return -ENOMEM; - for (i = 0; i < size; i++) { - struct fprobe_rethook_node *node; - - node = kzalloc(sizeof(*node) + fp->entry_data_size, GFP_KERNEL); - if (!node) { - rethook_free(fp->rethook); - fp->rethook = NULL; - return -ENOMEM; + + fp->nmissed = 0; + + hlist_array->size = num; + fp->hlist_array = hlist_array; + hlist_array->fp = fp; + for (i = 0; i < num; i++) { + hlist_array->array[i].fp = fp; + addr = ftrace_location(addrs[i]); + if (!addr) { + fprobe_fail_cleanup(fp); + return -ENOENT; } - rethook_add_node(fp->rethook, &node->node); + hlist_array->array[i].addr = addr; } return 0; } -static void fprobe_fail_cleanup(struct fprobe *fp) -{ - if (fp->rethook) { - /* Don't need to cleanup rethook->handler because this is not used. */ - rethook_free(fp->rethook); - fp->rethook = NULL; - } - ftrace_free_filter(&fp->ops); -} +#define FPROBE_IPS_MAX INT_MAX /** * register_fprobe() - Register fprobe to ftrace by pattern. @@ -246,46 +510,24 @@ static void fprobe_fail_cleanup(struct fprobe *fp) */ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter) { - struct ftrace_hash *hash; - unsigned char *str; - int ret, len; + unsigned long *addrs; + int ret; if (!fp || !filter) return -EINVAL; - fprobe_init(fp); - - len = strlen(filter); - str = kstrdup(filter, GFP_KERNEL); - ret = ftrace_set_filter(&fp->ops, str, len, 0); - kfree(str); - if (ret) + ret = ip_list_from_filter(filter, notfilter, NULL, FPROBE_IPS_MAX); + if (ret < 0) return ret; - if (notfilter) { - len = strlen(notfilter); - str = kstrdup(notfilter, GFP_KERNEL); - ret = ftrace_set_notrace(&fp->ops, str, len, 0); - kfree(str); - if (ret) - goto out; - } - - /* TODO: - * correctly calculate the total number of filtered symbols - * from both filter and notfilter. 
- */ - hash = rcu_access_pointer(fp->ops.local_hash.filter_hash); - if (WARN_ON_ONCE(!hash)) - goto out; - - ret = fprobe_init_rethook(fp, (int)hash->count); - if (!ret) - ret = register_ftrace_function(&fp->ops); + addrs = kcalloc(ret, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) + return -ENOMEM; + ret = ip_list_from_filter(filter, notfilter, addrs, ret); + if (ret > 0) + ret = register_fprobe_ips(fp, addrs, ret); -out: - if (ret) - fprobe_fail_cleanup(fp); + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(register_fprobe); @@ -293,7 +535,7 @@ EXPORT_SYMBOL_GPL(register_fprobe); /** * register_fprobe_ips() - Register fprobe to ftrace by address. * @fp: A fprobe data structure to be registered. - * @addrs: An array of target ftrace location addresses. + * @addrs: An array of target function address. * @num: The number of entries of @addrs. * * Register @fp to ftrace for enabling the probe on the address given by @addrs. @@ -305,23 +547,27 @@ EXPORT_SYMBOL_GPL(register_fprobe); */ int register_fprobe_ips(struct fprobe *fp, unsigned long *addrs, int num) { - int ret; - - if (!fp || !addrs || num <= 0) - return -EINVAL; + struct fprobe_hlist *hlist_array; + int ret, i; - fprobe_init(fp); - - ret = ftrace_set_filter_ips(&fp->ops, addrs, num, 0, 0); + ret = fprobe_init(fp, addrs, num); if (ret) return ret; - ret = fprobe_init_rethook(fp, num); - if (!ret) - ret = register_ftrace_function(&fp->ops); + mutex_lock(&fprobe_mutex); + + hlist_array = fp->hlist_array; + ret = fprobe_graph_add_ips(addrs, num); + if (!ret) { + add_fprobe_hash(fp); + for (i = 0; i < hlist_array->size; i++) + insert_fprobe_node(&hlist_array->array[i]); + } + mutex_unlock(&fprobe_mutex); if (ret) fprobe_fail_cleanup(fp); + return ret; } EXPORT_SYMBOL_GPL(register_fprobe_ips); @@ -359,14 +605,13 @@ EXPORT_SYMBOL_GPL(register_fprobe_syms); bool fprobe_is_registered(struct fprobe *fp) { - if (!fp || (fp->ops.saved_func != fprobe_handler && - fp->ops.saved_func != fprobe_kprobe_handler)) + if (!fp || !fp->hlist_array) return false; return true; } /** - * unregister_fprobe() - Unregister fprobe from ftrace + * unregister_fprobe() - Unregister fprobe. * @fp: A fprobe data structure to be unregistered. * * Unregister fprobe (and remove ftrace hooks from the function entries). 
@@ -375,23 +620,40 @@ bool fprobe_is_registered(struct fprobe *fp) */ int unregister_fprobe(struct fprobe *fp) { - int ret; + struct fprobe_hlist *hlist_array; + unsigned long *addrs = NULL; + int ret = 0, i, count; - if (!fprobe_is_registered(fp)) - return -EINVAL; + mutex_lock(&fprobe_mutex); + if (!fp || !is_fprobe_still_exist(fp)) { + ret = -EINVAL; + goto out; + } - if (fp->rethook) - rethook_stop(fp->rethook); + hlist_array = fp->hlist_array; + addrs = kcalloc(hlist_array->size, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) { + ret = -ENOMEM; /* TODO: Fallback to one-by-one loop */ + goto out; + } - ret = unregister_ftrace_function(&fp->ops); - if (ret < 0) - return ret; + /* Remove non-synonim ips from table and hash */ + count = 0; + for (i = 0; i < hlist_array->size; i++) { + if (!delete_fprobe_node(&hlist_array->array[i])) + addrs[count++] = hlist_array->array[i].addr; + } + del_fprobe_hash(fp); - if (fp->rethook) - rethook_free(fp->rethook); + fprobe_graph_remove_ips(addrs, count); - ftrace_free_filter(&fp->ops); + kfree_rcu(hlist_array, rcu); + fp->hlist_array = NULL; +out: + mutex_unlock(&fprobe_mutex); + + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(unregister_fprobe); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 271ce0caeec0..cf92111b5c79 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -17,10 +17,8 @@ static u32 rand1, entry_val, exit_val; /* Use indirect calls to avoid inlining the target functions */ static u32 (*target)(u32 value); static u32 (*target2)(u32 value); -static u32 (*target_nest)(u32 value, u32 (*nest)(u32)); static unsigned long target_ip; static unsigned long target2_ip; -static unsigned long target_nest_ip; static int entry_return_value; static noinline u32 fprobe_selftest_target(u32 value) @@ -33,11 +31,6 @@ static noinline u32 fprobe_selftest_target2(u32 value) return (value / div_factor) + 1; } -static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) -{ - return nest(value + 2); -} - static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *data) @@ -79,22 +72,6 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, KUNIT_EXPECT_NULL(current_test, data); } -static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - return 0; -} - -static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); -} - /* Test entry only (no rethook) */ static void test_fprobe_entry(struct kunit *test) { @@ -191,25 +168,6 @@ static void test_fprobe_data(struct kunit *test) KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); } -/* Test nr_maxactive */ -static void test_fprobe_nest(struct kunit *test) -{ - static const char *syms[] = {"fprobe_selftest_target", "fprobe_selftest_nest_target"}; - struct fprobe fp = { - .entry_handler = nest_entry_handler, - .exit_handler = nest_exit_handler, - .nr_maxactive = 1, - }; - - current_test = test; - KUNIT_EXPECT_EQ(test, 0, register_fprobe_syms(&fp, syms, 2)); - - target_nest(rand1, target); - KUNIT_EXPECT_EQ(test, 1, fp.nmissed); - - KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); -} - static void test_fprobe_skip(struct kunit *test) { struct fprobe fp = { @@ -247,10 +205,8 @@ static 
int fprobe_test_init(struct kunit *test) rand1 = get_random_u32_above(div_factor); target = fprobe_selftest_target; target2 = fprobe_selftest_target2; - target_nest = fprobe_selftest_nest_target; target_ip = get_ftrace_location(target); target2_ip = get_ftrace_location(target2); - target_nest_ip = get_ftrace_location(target_nest); return 0; } @@ -260,7 +216,6 @@ static struct kunit_case fprobe_testcases[] = { KUNIT_CASE(test_fprobe), KUNIT_CASE(test_fprobe_syms), KUNIT_CASE(test_fprobe_data), - KUNIT_CASE(test_fprobe_nest), KUNIT_CASE(test_fprobe_skip), {} }; From patchwork Mon Nov 27 13:58:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170167 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3136791vqx; Mon, 27 Nov 2023 06:04:31 -0800 (PST) X-Google-Smtp-Source: AGHT+IER7l1o6TYTREspn2542mhLcz8pveYEAGu5Ng54Zlu81i4KUSF0xQ0JjwpJPixBSvdbsV2p X-Received: by 2002:a05:6808:1802:b0:3b2:f461:b31e with SMTP id bh2-20020a056808180200b003b2f461b31emr5264527oib.29.1701093871280; Mon, 27 Nov 2023 06:04:31 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093871; cv=none; d=google.com; s=arc-20160816; b=S5GXkz6PcUYLPpNtfgwT5EvalvWlyb+MknTEN3diJCiXjdblSxUTvP7Q3k7Wcg2a3H OpwFidM4SQ3zD+oKTZ4GbtMAH0uf+QM9qqmmxzwt4J5mPPJ6uktGX2qR/MLGQOj1MZsm NTSHWlQt2T7uCMifg3hiymnPNbdU0KpReHlvN2/Co1Lk8NS5JJO6hSo5Rcyzou1n3+R9 95HovTozl9rhbvS/LIkZrMjm6XTb6YNfVOEB4MlS4sHxBlgPp6V4eUN0G2mRqPLxJskO BNkaTk1RmS2NBGWL2TntRfM1wMX8UlUWmesNu+Ii89bW4XT9+QGo0LZlQSABJylw54Tc uj9A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=3rTm6qnkpyoUbdO9pnPymratIgRDvFcln4lBmuOF0tk=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=bs4/1RmI68vO8ZkZ0uE0RloSLdh/fSU/aLA412ltLNnAs+H83bm4p9FNRYCHozhhUX ns/ga/RqFnkIFqEm0rIJR8j+9oSBpQIezaGqXnK9O1E6ASLJHkdjc5I02IcRJNp6HEER pGiV5PjlqjcGimTjAwzLneQKwbBHKcDTHwuC1W/jWXosfd/mASWkynfEm6/9AdhYvCoU AmH3CoWxCkRxz/8TIJOGmsZY/w8nw0vTAYNsYB5G+Ze8fX+GqXO+PP/+oNCmWnzZu7Pq lTIW5cvU44KUCq9nTJueOYycyR6PzVTOddh99KE2sBTh3hyMSngIPKQU3FeEt+BbDvUP Ttsg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=StF3J+UU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from snail.vger.email (snail.vger.email. 
[23.128.96.37]) by mx.google.com with ESMTPS id bq11-20020a05680823cb00b003b71e05cc1dsi3976864oib.94.2023.11.27.06.04.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:04:31 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) client-ip=23.128.96.37; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=StF3J+UU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by snail.vger.email (Postfix) with ESMTP id A7A07809E722; Mon, 27 Nov 2023 06:04:13 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at snail.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233421AbjK0OD4 (ORCPT + 99 others); Mon, 27 Nov 2023 09:03:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233658AbjK0OBt (ORCPT ); Mon, 27 Nov 2023 09:01:49 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C6C2530D8 for ; Mon, 27 Nov 2023 05:58:59 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90F89C43397; Mon, 27 Nov 2023 13:58:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093539; bh=VzChugqmxn2FZXPNNk2q4QjwiB/rKZhvHsJJiY0UHmY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=StF3J+UU6tCL5CyM81pYCy3Cfu06EKgjhGYYqeHACqc8h3qjo/R5LPj6FxcxdBZCe G6Er7lE1w3KWnu4uBjb276MDK+hyOE3pIYp7mcQ8pEdZmXHVg7TOOawHH6J1xI9AKS zDtlG4tPQkdt4z3GqWfRjGj18AbkrBjoxtLiMkayQDSvAoH891rn+qf9CIckjr9wcX in/nEsaOfuGQNg1AChD8YidvWVArQsZEu3f9zhNSD+M9vMFS4kelrdOtm8rn9JHejX UDw2/iRDniGSfEw3X2uqOnaisb0h3hEmkxA9VdpCjQ7AI/7ReHyzEui18MUg6jVfFj fQufGmB2aP/zg== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 29/33] tracing/fprobe: Remove nr_maxactive from fprobe Date: Mon, 27 Nov 2023 22:58:53 +0900 Message-Id: <170109353263.343914.17085363307950852857.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:04:13 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726207014775966 X-GMAIL-MSGID: 1783726207014775966 From: Masami Hiramatsu (Google) Remove depercated 
fprobe::nr_maxactive. This involves fprobe events to rejects the maxactive number. Signed-off-by: Masami Hiramatsu (Google) --- Changes in v2: - Newly added. --- include/linux/fprobe.h | 2 -- kernel/trace/trace_fprobe.c | 44 ++++++------------------------------------- 2 files changed, 6 insertions(+), 40 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 08b37b0d1d05..c28d06ddfb8e 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -47,7 +47,6 @@ struct fprobe_hlist { * @nmissed: The counter for missing events. * @flags: The status flag. * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. @@ -56,7 +55,6 @@ struct fprobe { unsigned long nmissed; unsigned int flags; size_t entry_data_size; - int nr_maxactive; int (*entry_handler)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *regs, diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index c60d0d9f1a95..59d2ef8d9552 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -375,7 +375,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, const char *event, const char *symbol, struct tracepoint *tpoint, - int maxactive, int nargs, bool is_return) { struct trace_fprobe *tf; @@ -395,7 +394,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, tf->fp.entry_handler = fentry_dispatcher; tf->tpoint = tpoint; - tf->fp.nr_maxactive = maxactive; ret = trace_probe_init(&tf->tp, event, group, false); if (ret < 0) @@ -973,12 +971,11 @@ static int __trace_fprobe_create(int argc, const char *argv[]) * FETCHARG:TYPE : use TYPE instead of unsigned long. */ struct trace_fprobe *tf = NULL; - int i, len, new_argc = 0, ret = 0; + int i, new_argc = 0, ret = 0; bool is_return = false; char *symbol = NULL; const char *event = NULL, *group = FPROBE_EVENT_SYSTEM; const char **new_argv = NULL; - int maxactive = 0; char buf[MAX_EVENT_NAME_LEN]; char gbuf[MAX_EVENT_NAME_LEN]; char sbuf[KSYM_NAME_LEN]; @@ -999,33 +996,13 @@ static int __trace_fprobe_create(int argc, const char *argv[]) trace_probe_log_init("trace_fprobe", argc, argv); - event = strchr(&argv[0][1], ':'); - if (event) - event++; - - if (isdigit(argv[0][1])) { - if (event) - len = event - &argv[0][1] - 1; - else - len = strlen(&argv[0][1]); - if (len > MAX_EVENT_NAME_LEN - 1) { - trace_probe_log_err(1, BAD_MAXACT); - goto parse_error; - } - memcpy(buf, &argv[0][1], len); - buf[len] = '\0'; - ret = kstrtouint(buf, 0, &maxactive); - if (ret || !maxactive) { + if (argv[0][1] != '\0') { + if (argv[0][1] != ':') { + trace_probe_log_set_index(0); trace_probe_log_err(1, BAD_MAXACT); goto parse_error; } - /* fprobe rethook instances are iterated over via a list. The - * maximum should stay reasonable. 
- */ - if (maxactive > RETHOOK_MAXACTIVE_MAX) { - trace_probe_log_err(1, MAXACT_TOO_BIG); - goto parse_error; - } + event = &argv[0][2]; } trace_probe_log_set_index(1); @@ -1035,12 +1012,6 @@ static int __trace_fprobe_create(int argc, const char *argv[]) if (ret < 0) goto parse_error; - if (!is_return && maxactive) { - trace_probe_log_set_index(0); - trace_probe_log_err(1, BAD_MAXACT_TYPE); - goto parse_error; - } - trace_probe_log_set_index(0); if (event) { ret = traceprobe_parse_event_name(&event, &group, gbuf, @@ -1094,8 +1065,7 @@ static int __trace_fprobe_create(int argc, const char *argv[]) } /* setup a probe */ - tf = alloc_trace_fprobe(group, event, symbol, tpoint, maxactive, - argc, is_return); + tf = alloc_trace_fprobe(group, event, symbol, tpoint, argc, is_return); if (IS_ERR(tf)) { ret = PTR_ERR(tf); /* This must return -ENOMEM, else there is a bug */ @@ -1171,8 +1141,6 @@ static int trace_fprobe_show(struct seq_file *m, struct dyn_event *ev) seq_putc(m, 't'); else seq_putc(m, 'f'); - if (trace_fprobe_is_return(tf) && tf->fp.nr_maxactive) - seq_printf(m, "%d", tf->fp.nr_maxactive); seq_printf(m, ":%s/%s", trace_probe_group_name(&tf->tp), trace_probe_name(&tf->tp)); From patchwork Mon Nov 27 13:59:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170169 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3138876vqx; Mon, 27 Nov 2023 06:06:16 -0800 (PST) X-Google-Smtp-Source: AGHT+IEWu7qF5RUDRrSBO4gf5Yp+PAUjpgkwTMo4eD/yTSE0dIJC922Hll30q0GZVLiLyzBJ/bbY X-Received: by 2002:a05:6871:14d:b0:1f5:aed1:d211 with SMTP id z13-20020a056871014d00b001f5aed1d211mr12666290oab.16.1701093976394; Mon, 27 Nov 2023 06:06:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701093976; cv=none; d=google.com; s=arc-20160816; b=PT7giA+pC+OjkuyyngISF1r15QkcrTGt2gRTkTz1lr4Cm63Ufi86Srf/aEt/+YK3KK kANr0Y97SPeE1GqnYmWv6VKvfERvq7kF+2z1VvX1qu6QGYxWBivgLba13deLhZMujKfz EBkt225WQxQcdi0tgCy1WOfjk1PLFVlhme6yzNvJYDvJcFFX+IQ4wxSv+Z6RtO2JONAL VzIW4ozs6fCOvYA3VabUHlT3PFYfWcIrjwMEWYMFjpCtyKrXK8Ai6wRhnudX+SYZTgkm lXmT2JAM571VC+jRYpfHvVMp01L+UodXQvSYRG1MjWNx6x3xX1xgFqQh0GGetsL9EdxT CLow== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=YfpBiCdf9eAmTyP6fuFJw0ngIPfdY2ICTA+8bzETYls=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=cEZTp4CDHEipWSrMH0Y32NOGhu10W7MVdRzlf/1nGvBLLMXmr06qn7JIG2z3mVvKrF 7V+XB0U4/H8zVgyPBehZK+pnxW24kNzrza4vM1aX+OGvZdUC325h5ttjlBaOkml7liBj 7tLt+uBP208TokGQJfnWIj+5uCncgkrbXFn0oM6/k8O70bw6VJTfHq0edRz27Fz2CBY3 QdLl7CjonWI+/V+2a/xYMiDQC4sweLaLsZJjE/cvhQawsMfLx/HXF4uAPIRcFcrDmO8r /3HEFgNCHLQATeYgYp47WA16GQKq+utPL/L6glyxwkFppm1ngmBlMR2u3oEUpa+RJtdE xDLQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=JQsw52Bw; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from snail.vger.email (snail.vger.email. 
[23.128.96.37]) by mx.google.com with ESMTPS id gy16-20020a056870289000b001f9de9eeb1asi3533677oab.19.2023.11.27.06.06.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:06:16 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) client-ip=23.128.96.37; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=JQsw52Bw; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.37 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by snail.vger.email (Postfix) with ESMTP id D57F880779A4; Mon, 27 Nov 2023 06:06:03 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at snail.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233381AbjK0OFx (ORCPT + 99 others); Mon, 27 Nov 2023 09:05:53 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233735AbjK0ODu (ORCPT ); Mon, 27 Nov 2023 09:03:50 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63AC6325F for ; Mon, 27 Nov 2023 05:59:12 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D442FC43142; Mon, 27 Nov 2023 13:59:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093552; bh=PEWsPdgg+aXqu72zXNVmfsbH7GHdoZ9vi/blpknJS3E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JQsw52BwqAuRY1e6Cmb//PfNDh+N0JGAjBcyOB74tG8KTc6vL2fo2EgeHaz29pLMJ yvFhoXKKy/Ka0MaoebY56k+jbeH7KCTCByGEFmHRk/KfqZAJDFH14wUrXVpU2trR+f hqspLJTmHpSy7CPOVmGURfJ0+bD4iR4RePujmYCnbaogtmpCt8u7bf8Aa5aCFmhRXO Bgm6lmfOoJY2xryyS3V0+2mlbpyc8PSTW1Qt0iWtBaormJLBw6QIBlg76NzvmirGsS fHIU1T9sioTMYUjoWCTfzu8O/G2VVOHYnQopEhHl5DV/dp3riGdJ24PmzfIWhmuTVB xs6wRt0bQ6CmQ== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 30/33] tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS Date: Mon, 27 Nov 2023 22:59:05 +0900 Message-Id: <170109354482.343914.3788252376338598326.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:06:04 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726317080008237 X-GMAIL-MSGID: 1783726317080008237 From: Masami Hiramatsu 
(Google) Allow fprobe events to be enabled with CONFIG_DYNAMIC_FTRACE_WITH_ARGS. With this change, fprobe events mostly use ftrace_regs instead of pt_regs. Note that if the arch doesn't enable HAVE_PT_REGS_COMPAT_FTRACE_REGS, fprobe events will not be able to be used from perf. Signed-off-by: Masami Hiramatsu (Google) --- Chagnes in v3: - Use ftrace_regs_get_return_value(). Changes in v2: - Define ftrace_regs_get_kernel_stack_nth() for !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API. Changes from previous series: Update against the new series. --- include/linux/ftrace.h | 17 +++++++++ kernel/trace/Kconfig | 1 - kernel/trace/trace_fprobe.c | 74 ++++++++++++++++++++------------------- kernel/trace/trace_probe_tmpl.h | 2 + 4 files changed, 55 insertions(+), 39 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 8150edcf8496..ad28daa507f7 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -250,6 +250,23 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs) regs_query_register_offset(name) #endif +#ifdef CONFIG_HAVE_REGS_AND_STACK_ACCESS_API +static __always_inline unsigned long +ftrace_regs_get_kernel_stack_nth(struct ftrace_regs *fregs, unsigned int nth) +{ + unsigned long *stackp; + + stackp = (unsigned long *)ftrace_regs_get_stack_pointer(fregs); + if (((unsigned long)(stackp + nth) & ~(THREAD_SIZE - 1)) == + ((unsigned long)stackp & ~(THREAD_SIZE - 1))) + return *(stackp + nth); + + return 0; +} +#else /* !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ +#define ftrace_regs_get_kernel_stack_nth(fregs, nth) (0L) +#endif /* CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ + typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 11a96275b68c..169588021d90 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -681,7 +681,6 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS - depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 59d2ef8d9552..4b7684a7c95c 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -132,7 +132,7 @@ static int process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, void *base) { - struct pt_regs *regs = rec; + struct ftrace_regs *fregs = rec; unsigned long val; int ret; @@ -140,17 +140,17 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest, /* 1st stage: get value from context */ switch (code->op) { case FETCH_OP_STACK: - val = regs_get_kernel_stack_nth(regs, code->param); + val = ftrace_regs_get_kernel_stack_nth(fregs, code->param); break; case FETCH_OP_STACKP: - val = kernel_stack_pointer(regs); + val = ftrace_regs_get_stack_pointer(fregs); break; case FETCH_OP_RETVAL: - val = regs_return_value(regs); + val = ftrace_regs_get_return_value(fregs); break; #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API case FETCH_OP_ARG: - val = regs_get_kernel_argument(regs, code->param); + val = ftrace_regs_get_argument(fregs, code->param); break; #endif case FETCH_NOP_SYMBOL: /* Ignore a place holder */ @@ -170,7 +170,7 @@ NOKPROBE_SYMBOL(process_fetch_insn) /* function entry handler */ static nokprobe_inline void __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs, + struct ftrace_regs *fregs, struct trace_event_file *trace_file) { struct 
fentry_trace_entry_head *entry; @@ -184,36 +184,36 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if (trace_trigger_soft_disabled(trace_file)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->ip = entry_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fentry_trace_func(tf, entry_ip, regs, link->file); + __fentry_trace_func(tf, entry_ip, fregs, link->file); } NOKPROBE_SYMBOL(fentry_trace_func); /* Kretprobe handler */ static nokprobe_inline void __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, struct trace_event_file *trace_file) { struct fexit_trace_entry_head *entry; @@ -227,60 +227,63 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if (trace_trigger_soft_disabled(trace_file)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs) + unsigned long ret_ip, struct ftrace_regs *fregs) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fexit_trace_func(tf, entry_ip, ret_ip, regs, link->file); + __fexit_trace_func(tf, entry_ip, ret_ip, fregs, link->file); } NOKPROBE_SYMBOL(fexit_trace_func); #ifdef CONFIG_PERF_EVENTS static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fentry_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return 0; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return 0; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->ip = entry_ip; memset(&entry[1], 0, dsize); - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); return 0; @@ -289,30 
+292,33 @@ NOKPROBE_SYMBOL(fentry_perf_func); static void fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs) + unsigned long ret_ip, struct ftrace_regs *fregs) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fexit_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return; - dsize = __get_data_size(&tf->tp, regs); + dsize = __get_data_size(&tf->tp, fregs); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); } @@ -324,17 +330,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; - if (!regs) - return 0; - if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fentry_trace_func(tf, entry_ip, regs); + fentry_trace_func(tf, entry_ip, fregs); + #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - ret = fentry_perf_func(tf, entry_ip, regs); + ret = fentry_perf_func(tf, entry_ip, fregs); #endif return ret; } @@ -345,16 +348,13 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fexit_trace_func(tf, entry_ip, ret_ip, regs); + fexit_trace_func(tf, entry_ip, ret_ip, fregs); + #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - fexit_perf_func(tf, entry_ip, ret_ip, regs); + fexit_perf_func(tf, entry_ip, ret_ip, fregs); #endif } NOKPROBE_SYMBOL(fexit_dispatcher); diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h index 3935b347f874..05445a745a07 100644 --- a/kernel/trace/trace_probe_tmpl.h +++ b/kernel/trace/trace_probe_tmpl.h @@ -232,7 +232,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val, /* Sum up total data length for dynamic arrays (strings) */ static nokprobe_inline int -__get_data_size(struct trace_probe *tp, struct pt_regs *regs) +__get_data_size(struct trace_probe *tp, void *regs) { struct probe_arg *arg; int i, len, ret = 0; From patchwork Mon Nov 27 13:59:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170174 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3152053vqx; Mon, 27 Nov 2023 06:18:55 -0800 (PST) X-Google-Smtp-Source: AGHT+IGALdYExAN74sROci7e8B5Jg2nOpzYACRb1y9XPvAPjoFN+rpRr5IGNzZWgKX474+dmVFqS X-Received: by 2002:a05:6808:1486:b0:3b8:5c93:e072 with SMTP id e6-20020a056808148600b003b85c93e072mr10401461oiw.44.1701094735036; Mon, 27 Nov 2023 06:18:55 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; 
t=1701094735; cv=none; d=google.com; s=arc-20160816; b=fXaDbhwYqm24vierQjYQ9ACq4h90w2lANBpupkq6AH2SWVzz31qdA5BmmBrZUuG7/Q 2xcpdGBdZw0zxMHiVuAhRfItz7NlocvCmqPe18mLS1qCUjdrm9Wk5an88nobWNr/3MeF vuMi5E730SPGZDJrs2mOkDN0M/gxaTWkRcYl1aV1kQ9OASz2lo0c7E+/I6VMxMqpxX6A Pfc8nGrdbxbPr+R0qBYukuh8ij49S9WC12/ji1sQ04mhlCDElGkb2/P8LcmS6Xca5bLc FCaVj3JVdX6rspQDaJSoHbhr0XpokZRYOYxwt4e2FHbvKIOs3AhqKzmbH1bKtDkYibbw xibg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=E1brWmeSwayCzbniOOCUraTFwlw+PNleAG8+mc9v1bk=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=xzyVKH3xCap0g2s0IJ6yFd/QGf8dgNNOa2NZB0IEsK08ZzPqKCnnokHFixxSFjegiQ jniu4S9yN2jlx1XvzzU3Rq9lk9L7S5WSgC4ymZLtf2Tp0GIDoGaYvuCBlU6sbYKTX4oQ IEflZluFu/zqt90FITw+ckVSFFstGUjEkb8kMdxg/Ug0cP4LDDibh4nNZJgL64ogrmUu iY/bxQoUyOkC2+wUMarte6n7V5507XGpmTWnF/AF8L8mGSyXD1MfXeqKBgTAzDA+mkCL CNd/D9ZBGQsWnFCBgJC5BM49Itt6Kr98QZ0K1BEvMyCV6ICT1mMFzZWmhzJqjneAYUBv ZAhA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=SXWmHYau; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from agentk.vger.email (agentk.vger.email. [23.128.96.32]) by mx.google.com with ESMTPS id ct9-20020a056808360900b003b8790adc7esi464349oib.151.2023.11.27.06.18.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:18:55 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) client-ip=23.128.96.32; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=SXWmHYau; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.32 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by agentk.vger.email (Postfix) with ESMTP id 71E16805E462; Mon, 27 Nov 2023 06:18:52 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at agentk.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233652AbjK0OS1 (ORCPT + 99 others); Mon, 27 Nov 2023 09:18:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46640 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233655AbjK0ORt (ORCPT ); Mon, 27 Nov 2023 09:17:49 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A453735AC for ; Mon, 27 Nov 2023 05:59:24 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 71A35C433AB; Mon, 27 Nov 2023 13:59:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093564; bh=x87p6LsSh0DkSyVn29mHeJbsXBvI/Rrv9TUScfOAGeQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SXWmHYauQUjST0Y11NW93XQWLBadrODno/ZlVMtGcPEUOSuANCO9uZ0UV+jWP8dtn AUACh18AzxrUNGftis5VvoJofRQgPCrf1DgXSUqs/MAv+Ke8k7GVNIAtK0n24ri8RU cBlezfnoTDA+/hP79kKs8UoO/SfK/1MUj7EfY0SI55AYDzgoVyfIyPHFN8Dpdg6HOd 49MTZi6yJAJJtKLTP8ECGosJMuwAZaKnGUPes4kPvl7akkaLPCicwEOdlmyDz21EUP 
D7Xl6w/CN/y+cHw/FnebSYpzbJCGHpH0FuXQ0Wcd4aWOP+dZ0Veg3dqIuOwRPSyfId lityG30fYv/Dw== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 31/33] bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled Date: Mon, 27 Nov 2023 22:59:17 +0900 Message-Id: <170109355741.343914.12552513151576894785.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on agentk.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:18:52 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783727112861366866 X-GMAIL-MSGID: 1783727112861366866 From: Masami Hiramatsu (Google) Enable kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is converted from ftrace_regs by ftrace_partial_regs(), thus some registers may always returns 0. But it should be enough for function entry (access arguments) and exit (access return value). Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest --- Changes from previous series: NOTHING, Update against the new series. 
--- kernel/trace/bpf_trace.c | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index a4b5e34b0419..96d6e9993f75 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2501,7 +2501,7 @@ static int __init bpf_event_init(void) fs_initcall(bpf_event_init); #endif /* CONFIG_MODULES */ -#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) +#ifdef CONFIG_FPROBE struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2524,6 +2524,8 @@ struct user_syms { char *buf; }; +static DEFINE_PER_CPU(struct pt_regs, bpf_kprobe_multi_pt_regs); + static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32 cnt) { unsigned long __user usymbol; @@ -2700,13 +2702,14 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx) static int kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, - unsigned long entry_ip, struct pt_regs *regs) + unsigned long entry_ip, struct ftrace_regs *fregs) { struct bpf_kprobe_multi_run_ctx run_ctx = { .link = link, .entry_ip = entry_ip, }; struct bpf_run_ctx *old_run_ctx; + struct pt_regs *regs; int err; if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) { @@ -2716,6 +2719,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, migrate_disable(); rcu_read_lock(); + regs = ftrace_partial_regs(fregs, this_cpu_ptr(&bpf_kprobe_multi_pt_regs)); old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx); err = bpf_prog_run(link->link.prog, regs); bpf_reset_run_ctx(old_run_ctx); @@ -2733,13 +2737,9 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, void *data) { struct bpf_kprobe_multi_link *link; - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return 0; link = container_of(fp, struct bpf_kprobe_multi_link, fp); - kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); + kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs); return 0; } @@ -2749,13 +2749,9 @@ kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, void *data) { struct bpf_kprobe_multi_link *link; - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); - kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs); + kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs); } static int symbols_cmp_r(const void *a, const void *b, const void *priv) @@ -3012,7 +3008,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +#else /* !CONFIG_FPROBE */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; From patchwork Mon Nov 27 13:59:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170170 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3141065vqx; Mon, 27 Nov 2023 06:08:13 -0800 (PST) X-Google-Smtp-Source: AGHT+IESmRW9VtOgN5vbwERztCwXn+w5PzsvgPRjmZsV7UBaOcqo2AHaSAbvtPYrg55KbIYu48ax X-Received: by 2002:a6b:e602:0:b0:79f:d04d:ce5a with SMTP id g2-20020a6be602000000b0079fd04dce5amr8037673ioh.2.1701094092756; Mon, 27 Nov 2023 06:08:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701094092; cv=none; d=google.com; s=arc-20160816; 
b=jKZ7UTmZZ/xdlpdibktHDxL6beI8clQT011b8jycLGynugpMtc7NzfRSc+YNTEz9dS h6+zroGZ6aq0eXxv8bNQSGwfV2jePYtKhYEMJQV5/qXyzEIJYcPCJ21DQjpAaX3yNcJv Eb2yIVQITcD98zp43Sjo8SEWTptgLoD8Qy1myJ5b0/Z2PhoUIK9/I9NVtvlDmuk5VyPK PfeawIYrgNcVI/KmdbUs13OKo52JojOnfL6Jt2xAsXexrayzziGREDUiuKUAj+DOSS4c 11HoLutYzjzy09ArHBe4q8sEfcfhlRAYnUC+og7KcLDCg9NKHbDXRJkQw7eB04cRjUOK hriw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=NRt0pRDVMb2HoTOOgHKbTFmgFt039oUBjnBj47jnGM0=; fh=SIgps5XdV0XNwjZfT2uAI7g3mrspDldK9Qs8qQAfoa4=; b=fcwQh4MQyzU/5Y1AiV4G6W9puR2ZHEJGQx8jf5vv6fhJmA5tKUj0kT98Wsv6vZOyIx DVC4mUp+hNoj7R6aTH1Npu2PzBKONszXKMdRyYgmbXQk0a3bMMzeyNW67b4oNYM43U9A kJ8+8pUtN9DNnNT8IQeRcMNg1bxLi3f7T0TLyNYM10siKya1fHExEhEAr2vTjYjWS6Y4 nkyFh6k3LuPufZiijFy4H22n6BAU/TjKkUKHrKn2PnOV0CC0O97W4zGsu3CXtATVdrvM +xoG7cSYS2afyz68Xlgx1bXX6ccxCeHT1oHDXygHe6xty4MaJBxs1bxUAOQLbffHjmwJ GWNg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=NEtBN7EH; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from groat.vger.email (groat.vger.email. [23.128.96.35]) by mx.google.com with ESMTPS id l9-20020a5e8809000000b007b3952c541esi1875948ioj.24.2023.11.27.06.08.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Nov 2023 06:08:12 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) client-ip=23.128.96.35; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=NEtBN7EH; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.35 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by groat.vger.email (Postfix) with ESMTP id 0D3EA8093F7B; Mon, 27 Nov 2023 06:08:07 -0800 (PST) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.11 at groat.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233672AbjK0OHd (ORCPT + 99 others); Mon, 27 Nov 2023 09:07:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58206 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233604AbjK0OF3 (ORCPT ); Mon, 27 Nov 2023 09:05:29 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0893E3845 for ; Mon, 27 Nov 2023 05:59:37 -0800 (PST) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F1FBC433A9; Mon, 27 Nov 2023 13:59:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1701093576; bh=AhqKrJz5NqAxQZJTxzvS35/o9mTQRDXLSjKOCvX5P6o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NEtBN7EHFofY5t1c/VbOBEmRjCpFOPoi3qQWSrZHG/R1MtKrdAd0i2iC9y2AIyLjR 90RV2dFfJgTjA6x7DjV1UVkrxD2f/li+GIVqCSI4dltc2gTriyxyErKV17dlNi9FOh dPkRqizNUZcBH9LNkBsIuZoFl7OU6fYLQmqusU2aLpH+c9KRPqwu04uLSNUx6khJuw 360nuMSO4nUpOhdh+4PfMoQGWfWjfzjbhUHz+h0KHNYigCnwUV6IO9OMXtGQvZ146C Ui9oBh/hxf6iEcq2kxWrnNr9SAiCgm3EqMX7JwIrVe+bktTm4WkTaEB5SGncx8Fcz3 
/jHuyfLEtiE4w== From: "Masami Hiramatsu (Google)" To: Alexei Starovoitov , Steven Rostedt , Florent Revest Cc: linux-trace-kernel@vger.kernel.org, LKML , Martin KaFai Lau , bpf , Sven Schnelle , Alexei Starovoitov , Jiri Olsa , Arnaldo Carvalho de Melo , Daniel Borkmann , Alan Maguire , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Guo Ren Subject: [PATCH v3 32/33] selftests: ftrace: Remove obsolate maxactive syntax check Date: Mon, 27 Nov 2023 22:59:30 +0900 Message-Id: <170109356964.343914.18101084627375981760.stgit@devnote2> X-Mailer: git-send-email 2.34.1 In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2> References: <170109317214.343914.4784420430328654397.stgit@devnote2> User-Agent: StGit/0.19 MIME-Version: 1.0 X-Spam-Status: No, score=-1.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on groat.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (groat.vger.email [0.0.0.0]); Mon, 27 Nov 2023 06:08:07 -0800 (PST) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1783726439052839620 X-GMAIL-MSGID: 1783726439052839620 From: Masami Hiramatsu (Google) Since the fprobe event does not support maxactive anymore, stop testing the maxactive syntax error checking. Signed-off-by: Masami Hiramatsu (Google) --- .../ftrace/test.d/dynevent/fprobe_syntax_errors.tc | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc index 20e42c030095..66516073ff27 100644 --- a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc +++ b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc @@ -16,9 +16,7 @@ aarch64) REG=%r0 ;; esac -check_error 'f^100 vfs_read' # MAXACT_NO_KPROBE -check_error 'f^1a111 vfs_read' # BAD_MAXACT -check_error 'f^100000 vfs_read' # MAXACT_TOO_BIG +check_error 'f^100 vfs_read' # BAD_MAXACT check_error 'f ^non_exist_func' # BAD_PROBE_ADDR (enoent) check_error 'f ^vfs_read+10' # BAD_PROBE_ADDR From patchwork Mon Nov 27 13:59:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Masami Hiramatsu (Google)" X-Patchwork-Id: 170173 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:ce62:0:b0:403:3b70:6f57 with SMTP id o2csp3151768vqx; Mon, 27 Nov 2023 06:18:37 -0800 (PST) X-Google-Smtp-Source: AGHT+IEziYJ/XZYqVYVqybq0ewDyQftlVwrTneLFlC+NHIdr+6W+7+1yBlvqq0u/zjmIKso2zomG X-Received: by 2002:a05:6871:e7c9:b0:1fa:306f:60d with SMTP id qc9-20020a056871e7c900b001fa306f060dmr9162887oac.4.1701094716891; Mon, 27 Nov 2023 06:18:36 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1701094716; cv=none; d=google.com; s=arc-20160816; b=JJq6kCHaRWrmnehTrKQaITnwpTJyTzfyPdILVXtLEayI6Uh9thpg2eyZenM3Iz1/+U FS5ZW9fLUe7hOTz+BMovpAg7Oi68Qy94ekqjrpaIY8W5zXS6H/C7xguH7NpKs1+7Spi/ ftWreHWAtPA7QksaX2WtlzV86MRHB82UcAyo3Vd7nFOeLzfmTdk/6nlIvCG/SwzkEamr IwJ4z9TfkbKYbH5fTkun87z5FkA6qyvCjURypIyi80K1XNuDdKmBGISMAK27PK1LjV+/ BkUofIHI8HT6NUNv4tA00Ubdz4l4qCrPL560a2/kbz3OqLVTXM4zZUzJrR0Jw2SaEEWR OpoA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
From patchwork Mon Nov 27 13:59:42 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 170173

From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
 Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
 Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
 Thomas Gleixner, Guo Ren
Subject: [PATCH v3 33/33] Documentation: probes: Update fprobe on function-graph tracer
Date: Mon, 27 Nov 2023 22:59:42 +0900
Message-Id: <170109358189.343914.13273695712450878884.stgit@devnote2>
In-Reply-To: <170109317214.343914.4784420430328654397.stgit@devnote2>
References: <170109317214.343914.4784420430328654397.stgit@devnote2>

From: Masami Hiramatsu (Google)

Update the fprobe documentation for the new fprobe on function-graph
tracer. This includes some behavior changes and the pt_regs to
ftrace_regs interface change.

Signed-off-by: Masami Hiramatsu (Google)
---
 Changes in v2:
  - Update @fregs parameter explanation.
---
 Documentation/trace/fprobe.rst | 42 ++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/Documentation/trace/fprobe.rst b/Documentation/trace/fprobe.rst
index 196f52386aaa..f58bdc64504f 100644
--- a/Documentation/trace/fprobe.rst
+++ b/Documentation/trace/fprobe.rst
@@ -9,9 +9,10 @@ Fprobe - Function entry/exit probe
 Introduction
 ============
 
-Fprobe is a function entry/exit probe mechanism based on ftrace.
-Instead of using ftrace full feature, if you only want to attach callbacks
-on function entry and exit, similar to the kprobes and kretprobes, you can
+Fprobe is a function entry/exit probe mechanism based on the function-graph
+tracer.
+Instead of tracing all functions, if you want to attach callbacks on specific
+function entry and exit, similar to the kprobes and kretprobes, you can
 use fprobe. Compared with kprobes and kretprobes, fprobe gives faster
 instrumentation for multiple functions with single handler. This document
 describes how to use fprobe.
@@ -91,12 +92,14 @@ The prototype of the entry/exit callback function are as follows:
 
 .. code-block:: c
 
- int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
+ int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data);
 
- void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
+ void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data);
 
-Note that the @entry_ip is saved at function entry and passed to exit handler.
-If the entry callback function returns !0, the corresponding exit callback will be cancelled.
+Note that the @entry_ip is saved at function entry and passed to exit
+handler.
+If the entry callback function returns !0, the corresponding exit callback
+will be cancelled.
 
 @fp
         This is the address of `fprobe` data structure related to this handler.
@@ -112,12 +115,10 @@ If the entry callback function returns !0, the corresponding exit callback will
         This is the return address that the traced function will return to,
         somewhere in the caller. This can be used at both entry and exit.
-@regs
-        This is the `pt_regs` data structure at the entry and exit. Note that
-        the instruction pointer of @regs may be different from the @entry_ip
-        in the entry_handler. If you need traced instruction pointer, you need
-        to use @entry_ip. On the other hand, in the exit_handler, the instruction
-        pointer of @regs is set to the current return address.
+@fregs
+        This is the `ftrace_regs` data structure at the entry and exit. This
+        includes the function parameters, or the return values. So the user
+        can access those values via the appropriate `ftrace_regs_*` APIs.
 
 @entry_data
         This is a local storage to share the data between entry and exit handlers.
@@ -125,6 +126,17 @@ If the entry callback function returns !0, the corresponding exit callback will
         and `entry_data_size` field when registering the fprobe, the storage is
         allocated and passed to both `entry_handler` and `exit_handler`.
 
+Entry data size and exit handlers on the same function
+======================================================
+
+Since the entry data is passed via a per-task stack and it has a limited
+size, the entry data size per probe is limited to `15 * sizeof(long)`. You
+also need to take care that if different fprobes are probing the same
+function, this limit becomes smaller. The entry data size is aligned to
+`sizeof(long)` and each fprobe which has an exit handler uses a
+`sizeof(long)` space on the stack, so you should keep the number of fprobes
+on the same function as small as possible.
+
 Share the callbacks with kprobes
 ================================
 
@@ -165,8 +177,8 @@ This counter counts up when;
 - fprobe fails to take ftrace_recursion lock. This usually means that a function
   which is traced by other ftrace users is called from the entry_handler.
 
-  - fprobe fails to setup the function exit because of the shortage of rethook
-    (the shadow stack for hooking the function return.)
+  - fprobe fails to setup the function exit because it fails to allocate the
+    data buffer from the per-task shadow stack.
 
 The `fprobe::nmissed` field counts up in both cases. Therefore, the former
 skips both of entry and exit callback and the latter skips the exit