From patchwork Fri Dec 2 15:58:17 2022
X-Patchwork-Submitter: Valentin Schneider
X-Patchwork-Id: 28957
From: Valentin Schneider <vschneid@redhat.com>
To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-ia64@vger.kernel.org, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
    linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-xtensa@linux-xtensa.org, x86@kernel.org
Cc: "Paul E. McKenney", Steven Rostedt, Peter Zijlstra, Thomas Gleixner,
    Sebastian Andrzej Siewior, Juri Lelli, Daniel Bristot de Oliveira,
    Marcelo Tosatti, Frederic Weisbecker, Ingo Molnar, Borislav Petkov,
    Dave Hansen, "H. Peter Anvin", Marc Zyngier, Mark Rutland, Russell King,
    Nicholas Piggin, Guo Ren,
    "David S. Miller"
Subject: [PATCH v3 8/8] sched, smp: Trace smp callback causing an IPI
Date: Fri, 2 Dec 2022 15:58:17 +0000
Message-Id: <20221202155817.2102944-9-vschneid@redhat.com>
In-Reply-To: <20221202155817.2102944-1-vschneid@redhat.com>
References: <20221202155817.2102944-1-vschneid@redhat.com>

Context
=======

The newly-introduced ipi_send_cpumask tracepoint has a "callback" parameter
which so far has only been fed with NULL.

While CSD_TYPE_SYNC/ASYNC and CSD_TYPE_IRQ_WORK share a similar backing
struct layout (meaning their callback func can be accessed without caring
about the actual CSD type), CSD_TYPE_TTWU doesn't even have a function
attached to its struct. This means we need to check the type of a CSD
before eventually dereferencing its associated callback.

This isn't as trivial as it sounds: the CSD type is stored in
__call_single_node.u_flags, which gets cleared right before the callback is
executed via csd_unlock(). This implies checking the CSD type before it is
enqueued on the call_single_queue, as the target CPU's queue can be flushed
before we get to sending an IPI.

Furthermore, send_call_function_single_ipi() only has a CPU parameter, and
would need an additional argument to carry the invoked callback down to it.
This is somewhat silly, as the extra argument would always be pushed down
to the callee even when nothing is being traced, which is unnecessary
overhead.

Changes
=======

send_call_function_single_ipi() is only used by smp.c, and is defined in
sched/core.c as it contains scheduler-specific ops (set_nr_if_polling() of
a CPU's idle task).

Split it into two parts: the scheduler bits remain in sched/core.c, and the
actual IPI emission is moved into smp.c. This lets us define an
__always_inline helper function that can take the related callback as a
parameter without creating useless register pressure in the non-traced
path, which only gains a (disabled) static branch.

Do the same thing for the multi IPI case.

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 kernel/sched/core.c | 18 +++++++-----
 kernel/sched/smp.h  |  2 +-
 kernel/smp.c        | 72 +++++++++++++++++++++++++++++++++------------
 3 files changed, 66 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 40587b0d99329..e59aac936dcb9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3743,16 +3743,20 @@ void sched_ttwu_pending(void *arg)
 	rq_unlock_irqrestore(rq, &rf);
 }
 
-void send_call_function_single_ipi(int cpu)
+/*
+ * Prepare the scene for sending an IPI for a remote smp_call
+ *
+ * Returns true if the caller can proceed with sending the IPI.
+ * Returns false otherwise.
+ */
+bool call_function_single_prep_ipi(int cpu)
 {
-	struct rq *rq = cpu_rq(cpu);
-
-	if (!set_nr_if_polling(rq->idle)) {
-		trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
-		arch_send_call_function_single_ipi(cpu);
-	} else {
+	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
 		trace_sched_wake_idle_without_ipi(cpu);
+		return false;
 	}
+
+	return true;
 }
 
 /*
diff --git a/kernel/sched/smp.h b/kernel/sched/smp.h
index 2eb23dd0f2856..21ac44428bb02 100644
--- a/kernel/sched/smp.h
+++ b/kernel/sched/smp.h
@@ -6,7 +6,7 @@
 
 extern void sched_ttwu_pending(void *arg);
 
-extern void send_call_function_single_ipi(int cpu);
+extern bool call_function_single_prep_ipi(int cpu);
 
 #ifdef CONFIG_SMP
 extern void flush_smp_call_function_queue(void);
diff --git a/kernel/smp.c b/kernel/smp.c
index 821b5986721ac..5cd680a7e78ef 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -161,9 +161,18 @@ void __init call_function_init(void)
 }
 
 static __always_inline void
-send_call_function_ipi_mask(const struct cpumask *mask)
+send_call_function_single_ipi(int cpu, smp_call_func_t func)
 {
-	trace_ipi_send_cpumask(mask, _RET_IP_, NULL);
+	if (call_function_single_prep_ipi(cpu)) {
+		trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
+		arch_send_call_function_single_ipi(cpu);
+	}
+}
+
+static __always_inline void
+send_call_function_ipi_mask(const struct cpumask *mask, smp_call_func_t func)
+{
+	trace_ipi_send_cpumask(mask, _RET_IP_, func);
 	arch_send_call_function_ipi_mask(mask);
 }
 
@@ -430,12 +439,16 @@ static void __smp_call_single_queue_debug(int cpu, struct llist_node *node)
 	struct cfd_seq_local *seq = this_cpu_ptr(&cfd_seq_local);
 	struct call_function_data *cfd = this_cpu_ptr(&cfd_data);
 	struct cfd_percpu *pcpu = per_cpu_ptr(cfd->pcpu, cpu);
+	struct __call_single_data *csd;
+
+	csd = container_of(node, call_single_data_t, node.llist);
+	WARN_ON_ONCE(!(CSD_TYPE(csd) & (CSD_TYPE_SYNC | CSD_TYPE_ASYNC)));
 
 	cfd_seq_store(pcpu->seq_queue, this_cpu, cpu, CFD_SEQ_QUEUE);
 	if (llist_add(node, &per_cpu(call_single_queue, cpu))) {
 		cfd_seq_store(pcpu->seq_ipi, this_cpu, cpu, CFD_SEQ_IPI);
 		cfd_seq_store(seq->ping, this_cpu, cpu, CFD_SEQ_PING);
-		send_call_function_single_ipi(cpu);
+		send_call_function_single_ipi(cpu, csd->func);
 		cfd_seq_store(seq->pinged, this_cpu, cpu, CFD_SEQ_PINGED);
 	} else {
 		cfd_seq_store(pcpu->seq_noipi, this_cpu, cpu, CFD_SEQ_NOIPI);
@@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd)
 	smp_store_release(&csd->node.u_flags, 0);
 }
 
+static __always_inline void
+raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
+{
+	/*
+	 * The list addition should be visible to the target CPU when it pops
+	 * the head of the list to pull the entry off it in the IPI handler
+	 * because of normal cache coherency rules implied by the underlying
+	 * llist ops.
+	 *
+	 * If IPIs can go out of order to the cache coherency protocol
+	 * in an architecture, sufficient synchronisation should be added
+	 * to arch code to make it appear to obey cache coherency WRT
+	 * locking and barrier primitives. Generic code isn't really
+	 * equipped to do the right thing...
+	 */
+	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
+		send_call_function_single_ipi(cpu, func);
+}
+
 static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
 
 void __smp_call_single_queue(int cpu, struct llist_node *node)
@@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
 		}
 	}
 #endif
-
 	/*
-	 * The list addition should be visible to the target CPU when it pops
-	 * the head of the list to pull the entry off it in the IPI handler
-	 * because of normal cache coherency rules implied by the underlying
-	 * llist ops.
-	 *
-	 * If IPIs can go out of order to the cache coherency protocol
-	 * in an architecture, sufficient synchronisation should be added
-	 * to arch code to make it appear to obey cache coherency WRT
-	 * locking and barrier primitives. Generic code isn't really
-	 * equipped to do the right thing...
+	 * We have to check the type of the CSD before queueing it, because
+	 * once queued it can have its flags cleared by
+	 *   flush_smp_call_function_queue()
+	 * even if we haven't sent the smp_call IPI yet (e.g. the stopper
+	 * executes migration_cpu_stop() on the remote CPU).
 	 */
-	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
-		send_call_function_single_ipi(cpu);
+	if (trace_ipi_send_cpumask_enabled()) {
+		call_single_data_t *csd;
+		smp_call_func_t func;
+
+		csd = container_of(node, call_single_data_t, node.llist);
+		func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
+			sched_ttwu_pending : csd->func;
+
+		raw_smp_call_single_queue(cpu, node, func);
+	} else {
+		raw_smp_call_single_queue(cpu, node, NULL);
+	}
 }
 
 /*
@@ -976,9 +1012,9 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	 * provided mask.
 	 */
 	if (nr_cpus == 1)
-		send_call_function_single_ipi(last_cpu);
+		send_call_function_single_ipi(last_cpu, func);
 	else if (likely(nr_cpus > 1))
-		send_call_function_ipi_mask(cfd->cpumask_ipi);
+		send_call_function_ipi_mask(cfd->cpumask_ipi, func);
 
 	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->pinged, this_cpu,
 		      CFD_SEQ_NOCPU, CFD_SEQ_PINGED);
 }
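
Illustrative note (not part of the patch): the reason the callback can be
threaded through send_call_function_single_ipi() "for free" is the
combination of __always_inline and the (disabled) static branch behind
trace_ipi_send_cpumask_enabled(). Below is a minimal, self-contained
userspace sketch of that pattern; the identifiers tracing_enabled,
emit_trace, do_send_ipi and send_ipi are invented for the example and only
stand in for the kernel primitives named above.

/*
 * Userspace sketch only -- not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

typedef void (*smp_call_func_t)(void *info);

/* Stand-in for a disabled static branch: a compile-time constant here. */
static const bool tracing_enabled = false;

static void emit_trace(int cpu, smp_call_func_t func)
{
	/* Stand-in for the tracepoint: report target CPU and callback. */
	printf("IPI -> CPU%d, callback %p\n", cpu, (void *)func);
}

static void do_send_ipi(int cpu)
{
	/* Stand-in for the architecture IPI. */
	printf("sending IPI to CPU%d\n", cpu);
}

/*
 * Always inlined: when 'tracing_enabled' folds to false, the 'func'
 * argument is dead at every call site, so the untraced path does not pay
 * for carrying the callback down.
 */
static inline __attribute__((always_inline))
void send_ipi(int cpu, smp_call_func_t func)
{
	if (tracing_enabled)
		emit_trace(cpu, func);
	do_send_ipi(cpu);
}

static void example_callback(void *info)
{
	(void)info;
}

int main(void)
{
	send_ipi(1, example_callback);
	return 0;
}

With tracing_enabled flipped to true, the same call site reports which
callback caused the IPI before it is sent, which is the information the
ipi_send_cpumask tracepoint now exposes for smp_calls.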