From patchwork Thu Nov 9 00:41:09 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 163194
From: Josh Poimboeuf
To: Peter Zijlstra, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Indu Bhagat, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    linux-perf-users@vger.kernel.org, Mark Brown, linux-toolchains@vger.kernel.org
Subject: [PATCH RFC 04/10] perf: Introduce deferred user callchains
Date: Wed, 8 Nov 2023 16:41:09 -0800
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-kernel@vger.kernel.org

Instead of attempting to unwind user space from the NMI handler, defer
it to run in task context by sending a self-IPI and then scheduling the
unwind to run in the IRQ's exit task work before returning to user
space.
This allows the user stack page to be paged in if needed, avoids
duplicate unwinds for kernel-bound workloads, and prepares for SFrame
unwinding (so .sframe sections can be paged in on demand).

Suggested-by: Steven Rostedt
Suggested-by: Peter Zijlstra
Signed-off-by: Josh Poimboeuf
---
 arch/Kconfig                    |  3 ++
 include/linux/perf_event.h      | 22 ++++++--
 include/uapi/linux/perf_event.h |  1 +
 kernel/bpf/stackmap.c           |  5 +-
 kernel/events/callchain.c       |  7 ++-
 kernel/events/core.c            | 90 ++++++++++++++++++++++++++++++---
 6 files changed, 115 insertions(+), 13 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index f4b210ab0612..690c82212224 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -425,6 +425,9 @@ config HAVE_HARDLOCKUP_DETECTOR_ARCH
	  It uses the same command line parameters, and sysctl interface,
	  as the generic hardlockup detectors.

+config HAVE_PERF_CALLCHAIN_DEFERRED
+	bool
+
 config HAVE_PERF_REGS
	bool
	help
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2d8fa253b9df..2f232111dff2 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -786,6 +786,7 @@ struct perf_event {
	struct irq_work			pending_irq;
	struct callback_head		pending_task;
	unsigned int			pending_work;
+	unsigned int			pending_unwind;

	atomic_t			event_limit;

@@ -1113,7 +1114,10 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
 extern u64 perf_event_read_value(struct perf_event *event,
				 u64 *enabled, u64 *running);

-extern struct perf_callchain_entry *perf_callchain(struct perf_event *event, struct pt_regs *regs);
+extern void perf_callchain(struct perf_sample_data *data,
+			   struct perf_event *event, struct pt_regs *regs);
+extern void perf_callchain_deferred(struct perf_sample_data *data,
+				    struct perf_event *event, struct pt_regs *regs);

 static inline bool branch_sample_no_flags(const struct perf_event *event)
 {
@@ -1189,6 +1193,7 @@ struct perf_sample_data {
	u64				data_page_size;
	u64				code_page_size;
	u64				aux_size;
+	bool				deferred;
 } ____cacheline_aligned;

 /* default value for data source */
@@ -1206,6 +1211,7 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
	data->sample_flags = PERF_SAMPLE_PERIOD;
	data->period = period;
	data->dyn_size = 0;
+	data->deferred = false;

	if (addr) {
		data->addr = addr;
@@ -1219,7 +1225,11 @@ static inline void perf_sample_save_callchain(struct perf_sample_data *data,
 {
	int size = 1;

-	data->callchain = perf_callchain(event, regs);
+	if (data->deferred)
+		perf_callchain_deferred(data, event, regs);
+	else
+		perf_callchain(data, event, regs);
+
	size += data->callchain->nr;

	data->dyn_size += size * sizeof(u64);
@@ -1534,12 +1544,18 @@ extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct p
 extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
 extern struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool add_mark);
+		   u32 max_stack, bool add_mark, bool defer_user);
 extern int get_callchain_buffers(int max_stack);
 extern void put_callchain_buffers(void);
 extern struct perf_callchain_entry *get_callchain_entry(int *rctx);
 extern void put_callchain_entry(int rctx);

+#ifdef CONFIG_HAVE_PERF_CALLCHAIN_DEFERRED
+extern void perf_callchain_user_deferred(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
+#else
+static inline void perf_callchain_user_deferred(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) {}
+#endif
+
 extern int sysctl_perf_event_max_stack;
 extern int sysctl_perf_event_max_contexts_per_stack;
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 39c6a250dd1b..9a1127af4cda 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -1237,6 +1237,7 @@ enum perf_callchain_context {
	PERF_CONTEXT_HV			= (__u64)-32,
	PERF_CONTEXT_KERNEL		= (__u64)-128,
	PERF_CONTEXT_USER		= (__u64)-512,
+	PERF_CONTEXT_USER_DEFERRED	= (__u64)-640,

	PERF_CONTEXT_GUEST		= (__u64)-2048,
	PERF_CONTEXT_GUEST_KERNEL	= (__u64)-2176,
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index e4827ca5378d..fcdd26715b12 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -294,8 +294,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
	if (max_depth > sysctl_perf_event_max_stack)
		max_depth = sysctl_perf_event_max_stack;

-	trace = get_perf_callchain(regs, kernel, user, max_depth, false);
-
+	trace = get_perf_callchain(regs, kernel, user, max_depth, false, false);
	if (unlikely(!trace))
		/* couldn't fetch the stack trace */
		return -EFAULT;
@@ -420,7 +419,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
		trace = get_callchain_entry_for_task(task, max_depth);
	else
		trace = get_perf_callchain(regs, kernel, user, max_depth,
-					   false);
+					   false, false);
	if (unlikely(!trace))
		goto err_fault;
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 2bee8b6fda0e..16571c8d6771 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -178,7 +178,7 @@ put_callchain_entry(int rctx)

 struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool add_mark)
+		   u32 max_stack, bool add_mark, bool defer_user)
 {
	struct perf_callchain_entry *entry;
	struct perf_callchain_entry_ctx ctx;
@@ -207,6 +207,11 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
			regs = task_pt_regs(current);
		}

+		if (defer_user) {
+			perf_callchain_store_context(&ctx, PERF_CONTEXT_USER_DEFERRED);
+			goto exit_put;
+		}
+
		if (add_mark)
			perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5e41a3b70bcd..290e06b0071c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6751,6 +6751,12 @@ static void perf_pending_irq(struct irq_work *entry)
	struct perf_event *event = container_of(entry, struct perf_event, pending_irq);
	int rctx;

+	if (!is_software_event(event)) {
+		if (event->pending_unwind)
+			task_work_add(current, &event->pending_task, TWA_RESUME);
+		return;
+	}
+
	/*
	 * If we 'fail' here, that's OK, it means recursion is already disabled
	 * and we won't recurse 'further'.
@@ -6772,11 +6778,57 @@ static void perf_pending_irq(struct irq_work *entry)
		perf_swevent_put_recursion_context(rctx);
 }

+static void perf_pending_task_unwind(struct perf_event *event)
+{
+	struct pt_regs *regs = task_pt_regs(current);
+	struct perf_output_handle handle;
+	struct perf_event_header header;
+	struct perf_sample_data data;
+	struct perf_callchain_entry *callchain;
+
+	callchain = kmalloc(sizeof(struct perf_callchain_entry) +
+			    (sizeof(__u64) * event->attr.sample_max_stack) +
+			    (sizeof(__u64) * 1) /* one context */,
+			    GFP_KERNEL);
+	if (!callchain)
+		return;
+
+	callchain->nr = 0;
+	data.callchain = callchain;
+
+	perf_sample_data_init(&data, 0, event->hw.last_period);
+
+	data.deferred = true;
+
+	perf_prepare_sample(&data, event, regs);
+
+	perf_prepare_header(&header, &data, event, regs);
+
+	if (perf_output_begin(&handle, &data, event, header.size))
+		goto done;
+
+	perf_output_sample(&handle, &header, &data, event);
+
+	perf_output_end(&handle);
+
+done:
+	kfree(callchain);
+}
+
+
 static void perf_pending_task(struct callback_head *head)
 {
	struct perf_event *event = container_of(head, struct perf_event, pending_task);
	int rctx;

+	if (!is_software_event(event)) {
+		if (event->pending_unwind) {
+			perf_pending_task_unwind(event);
+			event->pending_unwind = 0;
+		}
+		return;
+	}
+
	/*
	 * If we 'fail' here, that's OK, it means recursion is already disabled
	 * and we won't recurse 'further'.
@@ -7587,22 +7639,48 @@ static u64 perf_get_page_size(unsigned long addr)

 static struct perf_callchain_entry __empty_callchain = { .nr = 0, };

-struct perf_callchain_entry *
-perf_callchain(struct perf_event *event, struct pt_regs *regs)
+void perf_callchain(struct perf_sample_data *data, struct perf_event *event,
+		    struct pt_regs *regs)
 {
	bool kernel = !event->attr.exclude_callchain_kernel;
	bool user   = !event->attr.exclude_callchain_user;
	const u32 max_stack = event->attr.sample_max_stack;
-	struct perf_callchain_entry *callchain;
+	bool defer_user = IS_ENABLED(CONFIG_HAVE_PERF_CALLCHAIN_DEFERRED);

	/* Disallow cross-task user callchains. */
	user &= !event->ctx->task || event->ctx->task == current;

	if (!kernel && !user)
-		return &__empty_callchain;
+		goto empty;

-	callchain = get_perf_callchain(regs, kernel, user, max_stack, true);
-	return callchain ?: &__empty_callchain;
+	data->callchain = get_perf_callchain(regs, kernel, user, max_stack, true, defer_user);
+	if (!data->callchain)
+		goto empty;
+
+	if (user && defer_user && !event->pending_unwind) {
+		event->pending_unwind = 1;
+		irq_work_queue(&event->pending_irq);
+	}
+
+	return;
+
+empty:
+	data->callchain = &__empty_callchain;
+}
+
+void perf_callchain_deferred(struct perf_sample_data *data,
+			     struct perf_event *event, struct pt_regs *regs)
+{
+	struct perf_callchain_entry_ctx ctx;
+
+	ctx.entry = data->callchain;
+	ctx.max_stack = event->attr.sample_max_stack;
+	ctx.nr = 0;
+	ctx.contexts = 0;
+	ctx.contexts_maxed = false;
+
+	perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
+	perf_callchain_user_deferred(&ctx, regs);
 }

 static __always_inline u64 __cond_set(u64 flags, u64 s, u64 d)
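
For reference, an architecture that selects HAVE_PERF_CALLCHAIN_DEFERRED
also has to supply perf_callchain_user_deferred(), which runs in task
context where the user stack is allowed to fault.  The sketch below is
only an illustration of what such a hook might look like; it is not part
of this patch.  It assumes a frame-pointer-based user ABI where each
frame is a { next frame pointer, return address } pair, a hypothetical
struct user_stack_frame describing that layout, and an arch-provided
frame_pointer(regs) helper (which not every architecture has):

/*
 * Illustrative sketch only -- not part of this patch.
 *
 * Assumes a frame-pointer-based user ABI where each stack frame starts
 * with { next frame pointer, return address }.  A real implementation
 * would also validate that the frame pointer is aligned and strictly
 * moves toward the stack base.
 */
#include <linux/perf_event.h>
#include <linux/uaccess.h>
#include <asm/ptrace.h>

struct user_stack_frame {		/* hypothetical frame layout */
	unsigned long next_fp;
	unsigned long ret_addr;
};

void perf_callchain_user_deferred(struct perf_callchain_entry_ctx *entry,
				  struct pt_regs *regs)
{
	struct user_stack_frame frame;
	unsigned long fp;

	if (!user_mode(regs))
		return;

	/* Leaf entry: the user IP that was sampled. */
	perf_callchain_store(entry, instruction_pointer(regs));

	/*
	 * Unlike the NMI-time unwind, this runs in task context, so
	 * copy_from_user() may sleep and fault the user stack pages in.
	 */
	for (fp = frame_pointer(regs); entry->nr < entry->max_stack; fp = frame.next_fp) {
		if (copy_from_user(&frame, (void __user *)fp, sizeof(frame)))
			break;
		if (!frame.ret_addr)
			break;
		perf_callchain_store(entry, frame.ret_addr);
	}
}

The generic plumbing is already in the patch above: perf_callchain()
queues the pending_irq self-IPI, perf_pending_irq() converts it into
task work, and perf_pending_task_unwind() emits a follow-up sample whose
callchain carries the PERF_CONTEXT_USER entries, tied back to the
original sample by its PERF_CONTEXT_USER_DEFERRED marker.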