Message ID | 20230508223124.1438167-3-imran.f.khan@oracle.com |
---|---|
State | New |
Headers |
From: Imran Khan <imran.f.khan@oracle.com>
To: peterz@infradead.org, paulmck@kernel.org, jgross@suse.com, vschneid@redhat.com, yury.norov@gmail.com, tglx@linutronix.de
Cc: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH 2/2] smp: Reduce NMI traffic from CSD waiters to CSD destination.
Date: Tue, 9 May 2023 08:31:24 +1000
Message-Id: <20230508223124.1438167-3-imran.f.khan@oracle.com>
In-Reply-To: <20230508223124.1438167-1-imran.f.khan@oracle.com>
References: <20230508223124.1438167-1-imran.f.khan@oracle.com> |
Series | smp: Reduce logging due to dump_stack of CSD waiters |
Commit Message
Imran Khan
May 8, 2023, 10:31 p.m. UTC
On systems with hundreds of CPUs, if a few hundred or even most of the
CPUs detect a CSD hang, all of these waiters end up sending an NMI to
the destination CPU to dump its backtrace.
Depending on the number of such NMIs, the destination CPU can spend a
significant amount of time handling them, which makes it even harder
for that CPU to address the pending CSDs in a timely manner.
In the worst case, by the time the destination CPU has finished
handling all of these backtrace NMIs, the CSD wait time may have
elapsed again, so all of the waiters resend their backtrace NMIs and
this behaviour continues in a loop.
To avoid this scenario, issue the backtrace NMI only from the first
waiter. The other waiters on the same CSD destination can make use of
the backtrace obtained via the first waiter's NMI.
Signed-off-by: Imran Khan <imran.f.khan@oracle.com>
---
kernel/smp.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
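
The core of the change is a per-CPU gate that only the first waiter can
claim. Below is a minimal userspace sketch of the same first-claimer
gating pattern, using C11 atomics in place of the kernel's per-CPU
atomic_t and a printf() in place of dump_cpu_task(); the names
(maybe_dump_cpu(), flush_queue(), the NR_CPUS value) are illustrative
only, not kernel APIs.

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

/* One gate per target CPU; 1 means "a backtrace NMI may be sent". */
static atomic_int trigger_backtrace[NR_CPUS] = {1, 1, 1, 1};

/* Waiter side: only the caller that wins the 1 -> 0 transition dumps. */
static void maybe_dump_cpu(int cpu)
{
	int armed = 1;

	if (atomic_compare_exchange_strong_explicit(&trigger_backtrace[cpu],
						    &armed, 0,
						    memory_order_acquire,
						    memory_order_relaxed))
		printf("first waiter: dumping backtrace of CPU %d\n", cpu);
	/* Losing waiters rely on the backtrace the winner requested. */
}

/* Destination side: re-arm the gate when the CSD queue gets flushed. */
static void flush_queue(int cpu)
{
	atomic_store_explicit(&trigger_backtrace[cpu], 1,
			      memory_order_release);
	/* ... drain and execute the pending CSDs here ... */
}

int main(void)
{
	maybe_dump_cpu(0);	/* wins the gate, dumps         */
	maybe_dump_cpu(0);	/* loses the gate, stays quiet  */
	flush_queue(0);		/* destination re-arms the gate */
	maybe_dump_cpu(0);	/* a fresh hang can dump again  */
	return 0;
}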
Comments
On Tue, May 09, 2023 at 08:31:24AM +1000, Imran Khan wrote:
> On systems with hundreds of CPUs, if a few hundred or even most of the
> CPUs detect a CSD hang, all of these waiters end up sending an NMI to
> the destination CPU to dump its backtrace.
> [...]
> To avoid this scenario, issue the backtrace NMI only from the first
> waiter. The other waiters on the same CSD destination can make use of
> the backtrace obtained via the first waiter's NMI.
>
> Signed-off-by: Imran Khan <imran.f.khan@oracle.com>

Reviewed-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  kernel/smp.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> [...]
Hello Paul,

On 16/5/2023 10:09 pm, Paul E. McKenney wrote:
> On Tue, May 09, 2023 at 08:31:24AM +1000, Imran Khan wrote:
> [...]
>
> Reviewed-by: Paul E. McKenney <paulmck@kernel.org>

Thanks a lot for reviewing this and [1]. Could you kindly let me know
whether you plan to pick these up in your tree at some point.

Thanks,
Imran

[1]: https://lore.kernel.org/all/088edfa0-c1b7-407f-8b20-caf0fecfbb79@paulmck-laptop/
On Tue, May 30, 2023 at 11:24:00AM +1000, Imran Khan wrote:
> Hello Paul,
>
> [...]
>
> Thanks a lot for reviewing this and [1]. Could you kindly let me know
> whether you plan to pick these up in your tree at some point.

I have done so, and they should make it to -next early next week,
assuming testing goes well.

							Thanx, Paul
diff --git a/kernel/smp.c b/kernel/smp.c
index b7ccba677a0a0..a1cd21ea8b308 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -43,6 +43,8 @@ static DEFINE_PER_CPU_ALIGNED(struct call_function_data, cfd_data);
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct llist_head, call_single_queue);
 
+static DEFINE_PER_CPU(atomic_t, trigger_backtrace) = ATOMIC_INIT(1);
+
 static void __flush_smp_call_function_queue(bool warn_cpu_offline);
 
 int smpcfd_prepare_cpu(unsigned int cpu)
@@ -242,7 +244,8 @@ static bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *
 			 *bug_id, !cpu_cur_csd ? "unresponsive" : "handling this request");
 	}
 	if (cpu >= 0) {
-		dump_cpu_task(cpu);
+		if (atomic_cmpxchg_acquire(&per_cpu(trigger_backtrace, cpu), 1, 0))
+			dump_cpu_task(cpu);
 		if (!cpu_cur_csd) {
 			pr_alert("csd: Re-sending CSD lock (#%d) IPI from CPU#%02d to CPU#%02d\n", *bug_id, raw_smp_processor_id(), cpu);
 			arch_send_call_function_single_ipi(cpu);
@@ -423,9 +426,14 @@ static void __flush_smp_call_function_queue(bool warn_cpu_offline)
 	struct llist_node *entry, *prev;
 	struct llist_head *head;
 	static bool warned;
+	atomic_t *tbt;
 
 	lockdep_assert_irqs_disabled();
 
+	/* Allow waiters to send backtrace NMI from here onwards */
+	tbt = this_cpu_ptr(&trigger_backtrace);
+	atomic_set_release(tbt, 1);
+
 	head = this_cpu_ptr(&call_single_queue);
 	entry = llist_del_all(head);
 	entry = llist_reverse_order(entry);
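
A brief note on the memory-ordering choice in the patch, as I read it
(the thread itself does not discuss it): the waiter's
atomic_cmpxchg_acquire() pairs with the destination's
atomic_set_release(), so a waiter that claims a freshly re-armed gate
also observes everything the destination did before re-arming it at the
top of __flush_smp_call_function_queue(). Re-arming before the llist is
drained, rather than after, means each flush of the queue opens at most
one new backtrace window, instead of one per waiter.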