From patchwork Mon Jun 19 10:49:24 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: tip-bot2 for Thomas Gleixner
X-Patchwork-Id: 109923
Date: Mon, 19 Jun 2023 10:49:24 -0000
From: "tip-bot2 for Leonardo Bras"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: smp/core] trace,smp: Add tracepoints for scheduling remotely called functions
Cc: Valentin Schneider, Leonardo Bras, "Peter Zijlstra (Intel)", x86@kernel.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <20230615065944.188876-7-leobras@redhat.com>
References: <20230615065944.188876-7-leobras@redhat.com>
Message-ID: <168717176421.404.11770639740909284184.tip-bot2@tip-bot2>
The following commit has been merged into the smp/core branch of tip:

Commit-ID:     bf5a8c26ad7caf0772a1cd48c8a0924e48bdbaf0
Gitweb:        https://git.kernel.org/tip/bf5a8c26ad7caf0772a1cd48c8a0924e48bdbaf0
Author:        Leonardo Bras
AuthorDate:    Thu, 15 Jun 2023 03:59:47 -03:00
Committer:     Peter Zijlstra
CommitterDate: Fri, 16 Jun 2023 22:08:09 +02:00

trace,smp: Add tracepoints for scheduling remotely called functions

Add a tracepoint for when a CSD is queued to a remote CPU's
call_single_queue. This allows finding exactly which CPU queued a given
CSD when looking at a csd_function_{entry,exit} event, and also enables
us to accurately measure IPI delivery time with e.g. a synthetic event:

  $ echo 'hist:keys=cpu,csd.hex:ts=common_timestamp.usecs' >\
      /sys/kernel/tracing/events/smp/csd_queue_cpu/trigger
  $ echo 'csd_latency unsigned int dst_cpu; unsigned long csd; u64 time' >\
      /sys/kernel/tracing/synthetic_events
  $ echo \
      'hist:keys=common_cpu,csd.hex:'\
      'time=common_timestamp.usecs-$ts:'\
      'onmatch(smp.csd_queue_cpu).trace(csd_latency,common_cpu,csd,$time)' >\
      /sys/kernel/tracing/events/smp/csd_function_entry/trigger

  $ trace-cmd record -e 'synthetic:csd_latency' hackbench
  $ trace-cmd report
  <...>-467 [001] 21.824263: csd_queue_cpu: cpu=0 callsite=try_to_wake_up+0x2ea func=sched_ttwu_pending csd=0xffff8880076148b8
  <...>-467 [001] 21.824280: ipi_send_cpu: cpu=0 callsite=try_to_wake_up+0x2ea callback=generic_smp_call_function_single_interrupt+0x0
  <...>-489 [000] 21.824299: csd_function_entry: func=sched_ttwu_pending csd=0xffff8880076148b8
  <...>-489 [000] 21.824320: csd_latency: dst_cpu=0, csd=18446612682193848504, time=36

Suggested-by: Valentin Schneider
Signed-off-by: Leonardo Bras
Tested-and-reviewed-by: Valentin Schneider
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20230615065944.188876-7-leobras@redhat.com
---
 include/trace/events/csd.h | 27 +++++++++++++++++++++++++++
 kernel/smp.c               | 16 +++++-----------
 2 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/csd.h b/include/trace/events/csd.h
index af1df52..67e9d01 100644
--- a/include/trace/events/csd.h
+++ b/include/trace/events/csd.h
@@ -7,6 +7,33 @@
 
 #include <linux/tracepoint.h>
 
+TRACE_EVENT(csd_queue_cpu,
+
+	TP_PROTO(const unsigned int cpu,
+		 unsigned long callsite,
+		 smp_call_func_t func,
+		 struct __call_single_data *csd),
+
+	TP_ARGS(cpu, callsite, func, csd),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(void *, callsite)
+		__field(void *, func)
+		__field(void *, csd)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->callsite = (void *)callsite;
+		__entry->func = func;
+		__entry->csd = csd;
+	),
+
+	TP_printk("cpu=%u callsite=%pS func=%ps csd=%p",
+		  __entry->cpu, __entry->callsite, __entry->func, __entry->csd)
+	);
+
 /*
  * Tracepoints for a function which is called as an effect of smp_call_function.*
  */
diff --git a/kernel/smp.c b/kernel/smp.c
index 1fa01a8..385179d 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -340,7 +340,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
 	 * even if we haven't sent the smp_call IPI yet (e.g. the stopper
 	 * executes migration_cpu_stop() on the remote CPU).
 	 */
-	if (trace_ipi_send_cpu_enabled()) {
+	if (trace_csd_queue_cpu_enabled()) {
 		call_single_data_t *csd;
 		smp_call_func_t func;
 
@@ -348,7 +348,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
 		func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
 			sched_ttwu_pending : csd->func;
 
-		trace_ipi_send_cpu(cpu, _RET_IP_, func);
+		trace_csd_queue_cpu(cpu, _RET_IP_, func, csd);
 	}
 
 	/*
@@ -741,7 +741,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
 	bool wait = scf_flags & SCF_WAIT;
-	int nr_cpus = 0, nr_queued = 0;
+	int nr_cpus = 0;
 	bool run_remote = false;
 	bool run_local = false;
 
@@ -799,22 +799,16 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 			csd->node.src = smp_processor_id();
 			csd->node.dst = cpu;
 #endif
+			trace_csd_queue_cpu(cpu, _RET_IP_, func, csd);
+
 			if (llist_add(&csd->node.llist, &per_cpu(call_single_queue, cpu))) {
 				__cpumask_set_cpu(cpu, cfd->cpumask_ipi);
 				nr_cpus++;
 				last_cpu = cpu;
 			}
-			nr_queued++;
 		}
 
 		/*
-		 * Trace each smp_function_call_*() as an IPI, actual IPIs
-		 * will be traced with func==generic_smp_call_function_single_ipi().
-		 */
-		if (nr_queued)
-			trace_ipi_send_cpumask(cfd->cpumask, _RET_IP_, func);
-
-		/*
 		 * Choose the most efficient way to send an IPI. Note that the
 		 * number of CPUs might be zero due to concurrent changes to the
 		 * provided mask.
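
For a quick look at the new events without the histogram/synthetic-event setup
in the changelog, they can also be enabled directly through tracefs. This is
only an illustrative sketch, not part of the patch: it assumes tracefs is
mounted at /sys/kernel/tracing and that the events live under the "smp" group,
as in the trigger paths above (csd_queue_cpu from this patch,
csd_function_{entry,exit} as referenced in the changelog):

  $ echo 1 > /sys/kernel/tracing/events/smp/csd_queue_cpu/enable
  $ echo 1 > /sys/kernel/tracing/events/smp/csd_function_entry/enable
  $ echo 1 > /sys/kernel/tracing/events/smp/csd_function_exit/enable
  $ hackbench                               # generate cross-CPU wakeups/CSDs
  $ cat /sys/kernel/tracing/trace_pipe      # matching csd= values pair a queue
                                            # event with its remote execution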