Message ID | 20240130074744.45759-1-yaoma@linux.alibaba.com |
---|---|
From | Bitao Hu <yaoma@linux.alibaba.com> |
Date | Tue, 30 Jan 2024 15:47:42 +0800 |
To | dianders@chromium.org, akpm@linux-foundation.org, pmladek@suse.com, lecopzer.chen@mediatek.com, kernelfans@gmail.com, liusong@linux.alibaba.com |
Cc | linux-kernel@vger.kernel.org, yaoma@linux.alibaba.com |
Subject | [PATCHv2 0/2] *** Detect interrupt storm in softlockup *** |
Series | *** Detect interrupt storm in softlockup *** |
Message
Bitao Hu
Jan. 30, 2024, 7:47 a.m. UTC
I have implemented a low-overhead method for detecting interrupt storms in softlockup. Please review it; all comments are welcome.

Changes from v1 to v2:

- From Douglas: optimize the memory usage of cpustats. With the maximum
  number of CPUs (8192), the footprint is now
  2 * 8192 * 4 + 1 * 8192 * 5 * 4 + 1 * 8192 = 237,568 bytes.
- From Liu Song: refactor the code format and add necessary comments.
- From Douglas: use interrupt counts instead of interrupt time to
  determine the cause of the softlockup.
- Remove the cmdline parameter added in PATCHv1.

Bitao Hu (2):
  watchdog/softlockup: low-overhead detection of interrupt storm
  watchdog/softlockup: report the most frequent interrupts

 kernel/watchdog.c | 222 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 222 insertions(+)
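For readers checking the arithmetic, here is a minimal sketch of how the three terms add up, assuming NR_CPUS = 8192; the mapping of each term to a per-CPU variable in the comments is our reading of the figures, not taken from the patch:

```c
#include <stdio.h>

int main(void)
{
	/* Assumed reading of the three terms (per-CPU, 8192 CPUs max):
	 *   2 * 8192 * 4     - two 4-byte items per CPU
	 *   1 * 8192 * 5 * 4 - one 5 x 4 table of 1-byte utilization
	 *                      samples per CPU (20 bytes)
	 *   1 * 8192         - one 1-byte index per CPU
	 */
	unsigned long total = 2UL * 8192 * 4
			    + 1UL * 8192 * 5 * 4
			    + 1UL * 8192;

	printf("%lu bytes\n", total); /* prints 237568 */
	return 0;
}
```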
Comments
On 2024/1/30 15:47, Bitao Hu wrote:
> When the watchdog determines that the current soft lockup is due
> to an interrupt storm based on CPU utilization, reporting the
> most frequent interrupts could be good enough for further
> troubleshooting.
>
> Below is an example of interrupt storm. The call tree does not
> provide useful information, but we can analyze which interrupt
> caused the soft lockup by comparing the counts of interrupts.
>
> [ 2987.488075] watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [kworker/9:1:214]
> [ 2987.488607] CPU#9 Utilization every 4s during lockup:
> [ 2987.488941]  #1:   0% system,   0% softirq,  100% hardirq,   0% idle
> [ 2987.489357]  #2:   0% system,   0% softirq,  100% hardirq,   0% idle
> [ 2987.489771]  #3:   0% system,   0% softirq,  100% hardirq,   0% idle
> [ 2987.490186]  #4:   0% system,   0% softirq,  100% hardirq,   0% idle
> [ 2987.490601]  #5:   0% system,   0% softirq,  100% hardirq,   0% idle
> [ 2987.491034] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
> [ 2987.491493]  #1: 330985    irq#7(IPI)
> [ 2987.491743]  #2: 5000      irq#10(arch_timer)
> [ 2987.492039]  #3: 9         irq#91(nvme0q2)
> [ 2987.492318]  #4: 3         irq#118(virtio1-output.12)
> ...
> [ 2987.492728] Call trace:
> [ 2987.492729]  __do_softirq+0xa8/0x364
>
> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
> ---
>  kernel/watchdog.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 150 insertions(+)
>
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index 0efe9604c3c2..38fb18e17d71 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -25,6 +25,9 @@
>  #include <linux/stop_machine.h>
>  #include <linux/kernel_stat.h>
>  #include <linux/math64.h>
> +#include <linux/irq.h>
> +#include <linux/bitops.h>
> +#include <linux/irqdesc.h>
>
>  #include <asm/irq_regs.h>
>  #include <linux/kvm_para.h>
> @@ -431,11 +434,15 @@ void touch_softlockup_watchdog_sync(void)
>  	__this_cpu_write(watchdog_report_ts, SOFTLOCKUP_DELAY_REPORT);
>  }
>
> +static void set_potential_softlockup(unsigned long now, unsigned long touch_ts);
> +
>  static int is_softlockup(unsigned long touch_ts,
>  			 unsigned long period_ts,
>  			 unsigned long now)
>  {
>  	if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
> +		/* Softlockup may occur in the current period */
> +		set_potential_softlockup(now, period_ts);
>  		/* Warn about unreasonable delays. */
>  		if (time_after(now, period_ts + get_softlockup_thresh()))
>  			return now - touch_ts;
> @@ -457,6 +464,8 @@ static enum cpu_usage_stat idx_to_stat[NUM_STATS_PER_GROUP] = {
>  	CPUTIME_SYSTEM, CPUTIME_SOFTIRQ, CPUTIME_IRQ, CPUTIME_IDLE
>  };
>
> +static void print_hardirq_counts(void);
> +
>  static void update_cpustat(void)
>  {
>  	u8 i;
> @@ -504,10 +513,150 @@ static void print_cpustat(void)
>  			utilization[i][STATS_SYSTEM], utilization[i][STATS_SOFTIRQ],
>  			utilization[i][STATS_HARDIRQ], utilization[i][STATS_IDLE]);
>  	}
> +	print_hardirq_counts();
> +}
> +
> +#define HARDIRQ_PERCENT_THRESH	50
> +#define NUM_HARDIRQ_REPORT	5
> +static DECLARE_BITMAP(softlockup_hardirq_cpus, CONFIG_NR_CPUS);
> +static DEFINE_PER_CPU(u32 *, hardirq_counts);
> +
> +static void find_counts_top(u32 *irq_counts, int *irq, u32 perirq_counts, int perirq_id, int range)
> +{
> +	unsigned int i, j;
> +
> +	for (i = 0; i < range; i++) {
> +		if (perirq_counts > irq_counts[i]) {
> +			for (j = range - 1; j > i; j--) {
> +				irq_counts[j] = irq_counts[j - 1];
> +				irq[j] = irq[j - 1];
> +			}
> +			irq_counts[j] = perirq_counts;
> +			irq[j] = perirq_id;
> +			break;
> +		}
> +	}
> +}
> +
> +/*
> + * If the proportion of time spent handling irq exceeds HARDIRQ_PERCENT_THRESH%
> + * during sample_period, then it is necessary to record the counts of each irq.
> + */
> +static inline bool need_record_irq_counts(int type)
> +{
> +	int tail = this_cpu_read(cpustat_tail);
> +	u8 utilization;
> +
> +	if (--tail == -1)
> +		tail = 4;
> +	utilization = this_cpu_read(cpustat_utilization[tail][type]);
> +	return utilization > HARDIRQ_PERCENT_THRESH;
> +}
> +
> +/*
> + * Mark softlockup as potentially caused by hardirq
> + */
> +static void set_potential_softlockup_hardirq(void)
> +{
> +	u32 i;
> +	u32 *counts = __this_cpu_read(hardirq_counts);
> +	int cpu = smp_processor_id();
> +	struct irq_desc *desc;
> +
> +	if (!need_record_irq_counts(STATS_HARDIRQ))
> +		return;
> +
> +	if (!test_bit(cpu, softlockup_hardirq_cpus)) {
> +		counts = kmalloc_array(nr_irqs, sizeof(u32), GFP_ATOMIC);
> +		if (!counts)
> +			return;
> +		for_each_irq_desc(i, desc) {
> +			if (!desc)
> +				continue;
> +			counts[i] = desc->kstat_irqs ?
> +				*this_cpu_ptr(desc->kstat_irqs) : 0;
> +		}
> +		__this_cpu_write(hardirq_counts, counts);
> +		set_bit(cpu, softlockup_hardirq_cpus);
> +	}
> +}
> +
> +static void clear_potential_softlockup_hardirq(void)
> +{
> +	u32 *counts = __this_cpu_read(hardirq_counts);
> +	int cpu = smp_processor_id();
> +
> +	if (test_bit(cpu, softlockup_hardirq_cpus)) {
> +		kfree(counts);
> +		counts = NULL;
> +		__this_cpu_write(hardirq_counts, counts);
> +		clear_bit(cpu, softlockup_hardirq_cpus);
> +	}
> +}
> +
> +/*
> + * Mark that softlockup may occur
> + */
> +static void set_potential_softlockup(unsigned long now, unsigned long period_ts)
> +{
> +	if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5))
> +		set_potential_softlockup_hardirq();
> +}
> +
> +static void clear_potential_softlockup(void)
> +{
> +	clear_potential_softlockup_hardirq();
> +}
> +
> +static void print_hardirq_counts(void)
> +{
> +	u32 i;
> +	struct irq_desc *desc;
> +	u32 counts_diff;
> +	u32 *counts = __this_cpu_read(hardirq_counts);
> +	int cpu = smp_processor_id();
> +	u32 hardirq_counts_top[NUM_HARDIRQ_REPORT] = {0, 0, 0, 0, 0};
> +	int hardirq_top[NUM_HARDIRQ_REPORT] = {-1, -1, -1, -1, -1};

"hardirq_counts_top" and "hardirq_top" seem like two members of a struct,
working together to record the most suspicious irq, so wouldn't using a
struct make it clearer?

> +
> +	if (test_bit(cpu, softlockup_hardirq_cpus)) {
> +		/* Find the top NUM_HARDIRQ_REPORT most frequent interrupts */
> +		for_each_irq_desc(i, desc) {
> +			if (!desc)
> +				continue;
> +			counts_diff = desc->kstat_irqs ?
> +				*this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
> +			find_counts_top(hardirq_counts_top, hardirq_top,
> +					counts_diff, i, NUM_HARDIRQ_REPORT);
> +		}
> +		/*
> +		 * We do not want the "watchdog: " prefix on every line,
> +		 * hence we use "printk" instead of "pr_crit".
> +		 */
> +		printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
> +		       smp_processor_id(), HARDIRQ_PERCENT_THRESH);
> +		for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
> +			if (hardirq_top[i] == -1)
> +				break;
> +			desc = irq_to_desc(hardirq_top[i]);
> +			if (desc && desc->action)
> +				printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
> +				       i+1, hardirq_counts_top[i],
> +				       hardirq_top[i], desc->action->name);
> +			else
> +				printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
> +				       i+1, hardirq_counts_top[i],
> +				       hardirq_top[i]);
> +		}
> +		if (!need_record_irq_counts(STATS_HARDIRQ))
> +			clear_potential_softlockup_hardirq();
> +	}
> +}
> +
>  #else
>  static inline void update_cpustat(void) { }
>  static inline void print_cpustat(void) { }
> +static inline void set_potential_softlockup(unsigned long now, unsigned long period_ts) { }
> +static inline void clear_potential_softlockup(void) { }
>  #endif
>
>  /* watchdog detector functions */
> @@ -525,6 +674,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
>  static int softlockup_fn(void *data)
>  {
>  	update_touch_ts();
> +	clear_potential_softlockup();
>  	complete(this_cpu_ptr(&softlockup_completion));
>
>  	return 0;
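The core trick in the patch above is delta counting: snapshot each irq's cumulative per-CPU count (kstat_irqs) when a potential softlockup is flagged, then report only the difference at warning time, so interrupts that fired before the window are excluded. Below is a minimal userspace sketch of that technique; the simulated counter array and function names are illustrative, not kernel APIs.

```c
#include <stdint.h>
#include <stdio.h>

#define NR_IRQS 4

/* Illustrative stand-in for per-CPU kstat_irqs: cumulative counts. */
static uint32_t kstat_irqs_sim[NR_IRQS];

/* Snapshot taken when a potential softlockup is flagged. */
static uint32_t snapshot[NR_IRQS];

static void take_snapshot(void)
{
	for (int i = 0; i < NR_IRQS; i++)
		snapshot[i] = kstat_irqs_sim[i];
}

static void report_diffs(void)
{
	for (int i = 0; i < NR_IRQS; i++) {
		uint32_t diff = kstat_irqs_sim[i] - snapshot[i];
		if (diff)
			printf("irq#%d fired %u times during the window\n", i, diff);
	}
}

int main(void)
{
	kstat_irqs_sim[2] = 100;      /* history before the window */
	take_snapshot();              /* potential lockup flagged */
	kstat_irqs_sim[2] += 330985;  /* storming irq */
	kstat_irqs_sim[1] += 3;       /* background noise */
	report_diffs();               /* only in-window deltas show up */
	return 0;
}
```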
On 2024/1/31 09:50, Liu Song wrote:
> On 2024/1/30 15:47, Bitao Hu wrote:
> [full quote of the patch trimmed -- see the review above]
>> +	u32 hardirq_counts_top[NUM_HARDIRQ_REPORT] = {0, 0, 0, 0, 0};
>> +	int hardirq_top[NUM_HARDIRQ_REPORT] = {-1, -1, -1, -1, -1};
> "hardirq_counts_top" and "hardirq_top" seem like two members of a struct,
> working together to record the most suspicious irq, so wouldn't using a
> struct make it clearer?

OK, I will define a struct.
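The follow-up revision is not part of this thread, but one plausible shape for the agreed refactor is sketched below: a single slot type replaces the two parallel arrays, so each entry carries its irq number and count together. All names are illustrative, and this is a standalone userspace demo rather than the eventual kernel patch.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_HARDIRQ_REPORT 5

/* Sketch of the struct-based variant Liu Song asked for; names are
 * illustrative, not taken from a later revision of the patch. */
struct irq_count_slot {
	int irq;        /* irq number, -1 when the slot is empty */
	uint32_t count; /* interrupts observed during the lockup window */
};

/* Insert into an array kept sorted by descending count. */
static void irq_count_insert(struct irq_count_slot *top, int range,
			     int irq, uint32_t count)
{
	for (int i = 0; i < range; i++) {
		if (count > top[i].count) {
			/* shift smaller entries down, then drop the new one in */
			for (int j = range - 1; j > i; j--)
				top[j] = top[j - 1];
			top[i].irq = irq;
			top[i].count = count;
			break;
		}
	}
}

int main(void)
{
	struct irq_count_slot top[NUM_HARDIRQ_REPORT] = {
		{-1, 0}, {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0}
	};

	irq_count_insert(top, NUM_HARDIRQ_REPORT, 7, 330985);
	irq_count_insert(top, NUM_HARDIRQ_REPORT, 10, 5000);
	irq_count_insert(top, NUM_HARDIRQ_REPORT, 91, 9);

	for (int i = 0; i < NUM_HARDIRQ_REPORT && top[i].irq != -1; i++)
		printf("#%d: %u irq#%d\n", i + 1, top[i].count, top[i].irq);
	return 0;
}
```

Bundling the pair in one struct also makes the shift in the insertion loop a single assignment, which is harder to get wrong than moving two arrays in lockstep.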