From patchwork Thu Nov 9 00:41:10 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 163188
From: Josh Poimboeuf
To: Peter Zijlstra, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Indu Bhagat, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    linux-perf-users@vger.kernel.org, Mark Brown, linux-toolchains@vger.kernel.org
Subject: [PATCH RFC 05/10] perf/x86: Add HAVE_PERF_CALLCHAIN_DEFERRED
Date: Wed, 8 Nov 2023 16:41:10 -0800
X-Mailer: git-send-email 2.41.0

Enable deferred user space unwinding on x86.
Signed-off-by: Josh Poimboeuf
---
 arch/x86/Kconfig       |  1 +
 arch/x86/events/core.c | 47 ++++++++++++++++++++++++++++--------------
 2 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..cacf11ac4b10 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -256,6 +256,7 @@ config X86
 	select HAVE_PERF_EVENTS_NMI
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_PCI
+	select HAVE_PERF_CALLCHAIN_DEFERRED
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 40ad1425ffa2..ae264437f794 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2816,8 +2816,8 @@ static unsigned long get_segment_base(unsigned int segment)
 
 #include <linux/compat.h>
 
-static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
+static inline int __perf_callchain_user32(struct pt_regs *regs,
+					  struct perf_callchain_entry_ctx *entry)
 {
 	/* 32-bit process in 64-bit kernel. */
 	unsigned long ss_base, cs_base;
@@ -2831,7 +2831,6 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
 	ss_base = get_segment_base(regs->ss);
 
 	fp = compat_ptr(ss_base + regs->bp);
-	pagefault_disable();
 	while (entry->nr < entry->max_stack) {
 		if (!valid_user_frame(fp, sizeof(frame)))
 			break;
@@ -2844,19 +2843,18 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
 		perf_callchain_store(entry, cs_base + frame.return_address);
 		fp = compat_ptr(ss_base + frame.next_frame);
 	}
-	pagefault_enable();
 	return 1;
 }
-#else
-static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
+#else /* !CONFIG_IA32_EMULATION */
+static inline int __perf_callchain_user32(struct pt_regs *regs,
+					  struct perf_callchain_entry_ctx *entry)
 {
-	return 0;
+	return 0;
 }
-#endif
+#endif /* CONFIG_IA32_EMULATION */
 
-void
-perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+void __perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+			   struct pt_regs *regs, bool atomic)
 {
 	struct stack_frame frame;
 	const struct stack_frame __user *fp;
@@ -2876,13 +2874,15 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 
 	perf_callchain_store(entry, regs->ip);
 
-	if (!nmi_uaccess_okay())
+	if (atomic && !nmi_uaccess_okay())
 		return;
 
-	if (perf_callchain_user32(regs, entry))
-		return;
+	if (atomic)
+		pagefault_disable();
+
+	if (__perf_callchain_user32(regs, entry))
+		goto done;
 
-	pagefault_disable();
 	while (entry->nr < entry->max_stack) {
 		if (!valid_user_frame(fp, sizeof(frame)))
 			break;
@@ -2895,7 +2895,22 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 		perf_callchain_store(entry, frame.return_address);
 		fp = (void __user *)frame.next_frame;
 	}
-	pagefault_enable();
+done:
+	if (atomic)
+		pagefault_enable();
+}
+
+
+void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+			 struct pt_regs *regs)
+{
+	return __perf_callchain_user(entry, regs, true);
+}
+
+void perf_callchain_user_deferred(struct perf_callchain_entry_ctx *entry,
+				  struct pt_regs *regs)
+{
+	return __perf_callchain_user(entry, regs, false);
 }
 
 /*