From patchwork Fri Sep 29 18:16:24 2023
X-Patchwork-Submitter: "Luck, Tony"
X-Patchwork-Id: 146971
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam, Smita.KoralahalliChannabasappa@amd.com,
    dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v8 1/3] x86/mce: Remove old CMCI storm mitigation code
Date: Fri, 29 Sep 2023 11:16:24 -0700
Message-ID: <20230929181626.210782-2-tony.luck@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230929181626.210782-1-tony.luck@intel.com>
References: <20230718210813.291190-1-tony.luck@intel.com>
 <20230929181626.210782-1-tony.luck@intel.com>

When a "storm" of CMCI is detected, this code mitigates by disabling CMCI
interrupt signalling from all of the banks owned by the CPU that saw the
storm.

There are problems with this approach:

1) It is very coarse grained. In all likelihood only one of the banks was
   generating the interrupts, but CMCI is disabled for all. This means
   Linux may delay seeing and processing errors logged from other banks.

2) Although CMCI stands for Corrected Machine Check Interrupt, it is also
   used to signal when an uncorrected error is logged. This is a problem
   because these errors should be handled in a timely manner.

Delete all this code in preparation for a finer grained solution.

Reviewed-by: Yazen Ghannam
Tested-by: Yazen Ghannam
Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/mce/internal.h |   6 --
 arch/x86/kernel/cpu/mce/core.c     |  20 +---
 arch/x86/kernel/cpu/mce/intel.c    | 145 -----------------------------
 3 files changed, 1 insertion(+), 170 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index bcf1b3c66c9c..616732ec16f7 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,18 +41,12 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
-unsigned long cmci_intel_adjust_timer(unsigned long interval);
-bool mce_intel_cmci_poll(void);
-void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
-# define cmci_intel_adjust_timer mce_adjust_timer_default
-static inline bool mce_intel_cmci_poll(void) { return false; }
-static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6f35f724cc14..f6e87443b37a 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1611,13 +1611,6 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
 
-static unsigned long mce_adjust_timer_default(unsigned long interval)
-{
-	return interval;
-}
-
-static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
-
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
 	unsigned long when = jiffies + interval;
@@ -1647,15 +1640,9 @@ static void mce_timer_fn(struct timer_list *t)
 
 	iv = __this_cpu_read(mce_next_interval);
 
-	if (mce_available(this_cpu_ptr(&cpu_info))) {
+	if (mce_available(this_cpu_ptr(&cpu_info)))
 		mc_poll_banks();
 
-		if (mce_intel_cmci_poll()) {
-			iv = mce_adjust_timer(iv);
-			goto done;
-		}
-	}
-
 	/*
 	 * Alert userspace if needed. If we logged an MCE, reduce the polling
 	 * interval, otherwise increase the polling interval.
@@ -1665,7 +1652,6 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); -done: __this_cpu_write(mce_next_interval, iv); __start_timer(t, iv); } @@ -2005,7 +1991,6 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c) intel_init_cmci(); intel_init_lmce(); - mce_adjust_timer = cmci_intel_adjust_timer; } static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c) @@ -2018,7 +2003,6 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c) switch (c->x86_vendor) { case X86_VENDOR_INTEL: mce_intel_feature_init(c); - mce_adjust_timer = cmci_intel_adjust_timer; break; case X86_VENDOR_AMD: { @@ -2675,8 +2659,6 @@ static void mce_reenable_cpu(void) static int mce_cpu_dead(unsigned int cpu) { - mce_intel_hcpu_update(cpu); - /* intentionally ignoring frozen here */ if (!cpuhp_tasks_frozen) cmci_rediscover(); diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index f5323551c1a9..b07656408f67 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -41,15 +41,6 @@ */ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); -/* - * CMCI storm detection backoff counter - * - * During storm, we reset this counter to INITIAL_CHECK_INTERVAL in case we've - * encountered an error. If not, we decrement it by one. We signal the end of - * the CMCI storm when it reaches 0. - */ -static DEFINE_PER_CPU(int, cmci_backoff_cnt); - /* * cmci_discover_lock protects against parallel discovery attempts * which could race against each other. @@ -64,21 +55,6 @@ static DEFINE_RAW_SPINLOCK(cmci_discover_lock); static DEFINE_SPINLOCK(cmci_poll_lock); #define CMCI_THRESHOLD 1 -#define CMCI_POLL_INTERVAL (30 * HZ) -#define CMCI_STORM_INTERVAL (HZ) -#define CMCI_STORM_THRESHOLD 15 - -static DEFINE_PER_CPU(unsigned long, cmci_time_stamp); -static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt); -static DEFINE_PER_CPU(unsigned int, cmci_storm_state); - -enum { - CMCI_STORM_NONE, - CMCI_STORM_ACTIVE, - CMCI_STORM_SUBSIDED, -}; - -static atomic_t cmci_storm_on_cpus; static int cmci_supported(int *banks) { @@ -134,124 +110,6 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } -bool mce_intel_cmci_poll(void) -{ - if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE) - return false; - - /* - * Reset the counter if we've logged an error in the last poll - * during the storm. 
- */ - if (machine_check_poll(0, this_cpu_ptr(&mce_banks_owned))) - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - else - this_cpu_dec(cmci_backoff_cnt); - - return true; -} - -void mce_intel_hcpu_update(unsigned long cpu) -{ - if (per_cpu(cmci_storm_state, cpu) == CMCI_STORM_ACTIVE) - atomic_dec(&cmci_storm_on_cpus); - - per_cpu(cmci_storm_state, cpu) = CMCI_STORM_NONE; -} - -static void cmci_toggle_interrupt_mode(bool on) -{ - unsigned long flags, *owned; - int bank; - u64 val; - - raw_spin_lock_irqsave(&cmci_discover_lock, flags); - owned = this_cpu_ptr(mce_banks_owned); - for_each_set_bit(bank, owned, MAX_NR_BANKS) { - rdmsrl(MSR_IA32_MCx_CTL2(bank), val); - - if (on) - val |= MCI_CTL2_CMCI_EN; - else - val &= ~MCI_CTL2_CMCI_EN; - - wrmsrl(MSR_IA32_MCx_CTL2(bank), val); - } - raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); -} - -unsigned long cmci_intel_adjust_timer(unsigned long interval) -{ - if ((this_cpu_read(cmci_backoff_cnt) > 0) && - (__this_cpu_read(cmci_storm_state) == CMCI_STORM_ACTIVE)) { - mce_notify_irq(); - return CMCI_STORM_INTERVAL; - } - - switch (__this_cpu_read(cmci_storm_state)) { - case CMCI_STORM_ACTIVE: - - /* - * We switch back to interrupt mode once the poll timer has - * silenced itself. That means no events recorded and the timer - * interval is back to our poll interval. - */ - __this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED); - if (!atomic_sub_return(1, &cmci_storm_on_cpus)) - pr_notice("CMCI storm subsided: switching to interrupt mode\n"); - - fallthrough; - - case CMCI_STORM_SUBSIDED: - /* - * We wait for all CPUs to go back to SUBSIDED state. When that - * happens we switch back to interrupt mode. - */ - if (!atomic_read(&cmci_storm_on_cpus)) { - __this_cpu_write(cmci_storm_state, CMCI_STORM_NONE); - cmci_toggle_interrupt_mode(true); - cmci_recheck(); - } - return CMCI_POLL_INTERVAL; - default: - - /* We have shiny weather. Let the poll do whatever it thinks. */ - return interval; - } -} - -static bool cmci_storm_detect(void) -{ - unsigned int cnt = __this_cpu_read(cmci_storm_cnt); - unsigned long ts = __this_cpu_read(cmci_time_stamp); - unsigned long now = jiffies; - int r; - - if (__this_cpu_read(cmci_storm_state) != CMCI_STORM_NONE) - return true; - - if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) { - cnt++; - } else { - cnt = 1; - __this_cpu_write(cmci_time_stamp, now); - } - __this_cpu_write(cmci_storm_cnt, cnt); - - if (cnt <= CMCI_STORM_THRESHOLD) - return false; - - cmci_toggle_interrupt_mode(false); - __this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE); - r = atomic_add_return(1, &cmci_storm_on_cpus); - mce_timer_kick(CMCI_STORM_INTERVAL); - this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL); - - if (r == 1) - pr_notice("CMCI storm detected: switching to poll mode\n"); - return true; -} - /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. 
@@ -260,9 +118,6 @@ static bool cmci_storm_detect(void)
  */
 static void intel_threshold_interrupt(void)
 {
-	if (cmci_storm_detect())
-		return;
-
 	machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned));
 }
 

From patchwork Fri Sep 29 18:16:25 2023
X-Patchwork-Submitter: "Luck, Tony"
X-Patchwork-Id: 146843
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam, Smita.KoralahalliChannabasappa@amd.com,
    dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v8 2/3] x86/mce: Add per-bank CMCI storm mitigation
Date: Fri, 29 Sep 2023 11:16:25 -0700
Message-ID: <20230929181626.210782-3-tony.luck@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230929181626.210782-1-tony.luck@intel.com>
References: <20230718210813.291190-1-tony.luck@intel.com>
 <20230929181626.210782-1-tony.luck@intel.com>

This is the core functionality to track CMCI storms at the machine check
bank granularity. Subsequent patches will add the vendor specific hooks to
supply input to the storm detection and take actions on the start/end of a
storm.

Maintain a bitmap history for each bank showing whether the bank logged a
corrected error or not each time it is polled. In normal operation the
interval between polls of these banks determines how far to shift the
history. The 64 bit width corresponds to about one second.

When a storm is observed, a CPU vendor specific action is taken to reduce
or stop CMCI from the bank that is the source of the storm. The bank is
added to the bitmap of banks for this CPU to poll. The polling rate is
increased to once per second. During a storm each bit in the history
indicates the status of the bank each time it is polled. Thus the history
covers just over a minute.

Declare a storm for that bank if the number of corrected interrupts seen
in that history is above some threshold (defined as 5 in this series,
could be tuned later if there is data to suggest a better value).

A storm on a bank ends if enough consecutive polls of the bank show no
corrected errors (defined as 30, may also change). That calls the CPU
vendor specific function to revert to normal operational mode, and changes
the polling rate back to the default.

Updated with review comments from Yazen.
Link: https://lore.kernel.org/r/c76723df-f2f1-4888-9e05-61917145503c@amd.com
Signed-off-by: Tony Luck
---
Note that the storm begin/end warnings are now printed using
printk_deferred(KERN_NOTICE ...) instead of pr_notice(). This is my
solution for a locking issue when in poll mode because of the
"cmci_poll_lock" introduced by:

  commit c3629dd7e67d ("x86/mce: Prevent duplicate error records")

Is this the right way to handle this?

 arch/x86/kernel/cpu/mce/internal.h  | 39 +++++++++++++-
 arch/x86/kernel/cpu/mce/core.c      | 25 ++++++---
 arch/x86/kernel/cpu/mce/threshold.c | 83 +++++++++++++++++++++++++++++
 3 files changed, 138 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 616732ec16f7..5e75f5f81464 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -54,7 +54,44 @@ static inline void intel_clear_lmce(void) { }
 static inline bool intel_filter_mce(struct mce *m) { return false; }
 #endif
 
-void mce_timer_kick(unsigned long interval);
+void mce_timer_kick(bool storm);
+void cmci_storm_begin(unsigned int bank);
+void cmci_storm_end(unsigned int bank);
+void mce_track_storm(struct mce *mce);
+
+/*
+ * history:	bitmask tracking whether errors were seen or not seen in
+ *		the most recent polls of a bank.
+ * timestamp:	last time (in jiffies) that the bank was polled
+ * storm:	Is this bank in storm mode?
+ */ +struct storm_bank { + u64 history; + u64 timestamp; + bool storm; +}; + +/* + * banks: per-cpu, per-bank details + * stormy_bank_count: count of MC banks in storm state + * poll_mode: CPU is in poll mode + */ +struct mca_storm_desc { + struct storm_bank banks[MAX_NR_BANKS]; + u8 stormy_bank_count; + bool poll_mode; +}; +DECLARE_PER_CPU(struct mca_storm_desc, storm_desc); + +/* How many errors within the history buffer mark the start of a storm. */ +#define STORM_BEGIN_THRESHOLD 5 + +/* + * How many polls of machine check bank without an error before declaring + * the storm is over. Since it is tracked by the bitmaks in the history + * field of struct storm_bank the mask is 30 bits [0 ... 29]. + */ +#define STORM_END_POLL_THRESHOLD 29 #ifdef CONFIG_ACPI_APEI int apei_write_mce(struct mce *m); diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index f6e87443b37a..fccbf4a7e783 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -680,6 +680,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b) barrier(); m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS)); + mce_track_storm(&m); + /* If this entry is not valid, ignore it */ if (!(m.status & MCI_STATUS_VAL)) continue; @@ -1652,22 +1654,29 @@ static void mce_timer_fn(struct timer_list *t) else iv = min(iv * 2, round_jiffies_relative(check_interval * HZ)); - __this_cpu_write(mce_next_interval, iv); - __start_timer(t, iv); + if (__this_cpu_read(storm_desc.poll_mode)) { + __start_timer(t, HZ); + } else { + __this_cpu_write(mce_next_interval, iv); + __start_timer(t, iv); + } } /* - * Ensure that the timer is firing in @interval from now. + * When a storm starts on any bank on this CPU, switch to polling + * once per second. When the storm ends, revert to the default + * polling interval. */ -void mce_timer_kick(unsigned long interval) +void mce_timer_kick(bool storm) { struct timer_list *t = this_cpu_ptr(&mce_timer); - unsigned long iv = __this_cpu_read(mce_next_interval); - __start_timer(t, interval); + __this_cpu_write(storm_desc.poll_mode, storm); - if (interval < iv) - __this_cpu_write(mce_next_interval, interval); + if (storm) + __start_timer(t, HZ); + else + __this_cpu_write(mce_next_interval, check_interval * HZ); } /* Must not be called in IRQ context where del_timer_sync() can deadlock */ diff --git a/arch/x86/kernel/cpu/mce/threshold.c b/arch/x86/kernel/cpu/mce/threshold.c index ef4e7bb5fd88..c9e32d92cf8f 100644 --- a/arch/x86/kernel/cpu/mce/threshold.c +++ b/arch/x86/kernel/cpu/mce/threshold.c @@ -29,3 +29,86 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_threshold) trace_threshold_apic_exit(THRESHOLD_APIC_VECTOR); apic_eoi(); } + +DEFINE_PER_CPU(struct mca_storm_desc, storm_desc); + +static void mce_handle_storm(unsigned int bank, bool on) +{ + switch (boot_cpu_data.x86_vendor) { + } +} + +void cmci_storm_begin(unsigned int bank) +{ + struct mca_storm_desc *storm = this_cpu_ptr(&storm_desc); + + __set_bit(bank, this_cpu_ptr(mce_poll_banks)); + storm->banks[bank].storm = true; + + /* + * If this is the first bank on this CPU to enter storm mode + * start polling. + */ + if (++storm->stormy_bank_count == 1) + mce_timer_kick(true); +} + +void cmci_storm_end(unsigned int bank) +{ + struct mca_storm_desc *storm = this_cpu_ptr(&storm_desc); + + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + storm->banks[bank].history = 0; + storm->banks[bank].storm = false; + + /* If no banks left in storm mode, stop polling. 
*/
+	if (!this_cpu_dec_return(storm_desc.stormy_bank_count))
+		mce_timer_kick(false);
+}
+
+void mce_track_storm(struct mce *mce)
+{
+	struct mca_storm_desc *storm = this_cpu_ptr(&storm_desc);
+	unsigned long now = jiffies, delta;
+	unsigned int shift = 1;
+	u64 history = 0;
+
+	/*
+	 * When a bank is in storm mode it is polled once per second and
+	 * the history mask will record about the last minute of poll results.
+	 * If it is not in storm mode, then the bank is only checked when
+	 * there is a CMCI interrupt. Check how long it has been since
+	 * this bank was last checked, and adjust the amount of "shift"
+	 * to apply to history.
+	 */
+	if (!storm->banks[mce->bank].storm) {
+		delta = now - storm->banks[mce->bank].timestamp;
+		shift = (delta + HZ) / HZ;
+	}
+
+	/* If it has been a long time since the last poll, clear history. */
+	if (shift < 64)
+		history = storm->banks[mce->bank].history << shift;
+
+	storm->banks[mce->bank].timestamp = now;
+
+	/* History keeps track of corrected errors. VAL=1 && UC=0 */
+	if ((mce->status & MCI_STATUS_VAL) && mce_is_correctable(mce))
+		history |= 1;
+
+	storm->banks[mce->bank].history = history;
+
+	if (storm->banks[mce->bank].storm) {
+		if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD, 0))
+			return;
+		printk_deferred(KERN_NOTICE "CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), mce->bank);
+		mce_handle_storm(mce->bank, false);
+		cmci_storm_end(mce->bank);
+	} else {
+		if (hweight64(history) < STORM_BEGIN_THRESHOLD)
+			return;
+		printk_deferred(KERN_NOTICE "CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), mce->bank);
+		mce_handle_storm(mce->bank, true);
+		cmci_storm_begin(mce->bank);
+	}
+}
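
The shift-and-count arithmetic in mce_track_storm() above can be exercised
outside the kernel. The user-space sketch below mirrors only that arithmetic;
the sim_bank struct, the main() harness and the printf() output are invented
for illustration, and the real code additionally scales the shift by how long
it has been since the bank was last polled.

/* Standalone illustration only -- not part of the patch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STORM_BEGIN_THRESHOLD    5   /* error bits in history that start a storm */
#define STORM_END_POLL_THRESHOLD 29  /* clean polls [0..29] needed to end a storm */

struct sim_bank {
	uint64_t history;  /* bit 0 = most recent poll, 1 = corrected error seen */
	bool storm;
};

static int popcount64(uint64_t v)
{
	int n = 0;

	for (; v; v &= v - 1)
		n++;
	return n;
}

/* One poll of the bank: shift history by one and record the poll result. */
static void track_poll(struct sim_bank *b, bool saw_corrected_error)
{
	b->history <<= 1;
	if (saw_corrected_error)
		b->history |= 1;

	if (b->storm) {
		/* Storm ends only after 30 consecutive error-free polls. */
		if (!(b->history & ((((uint64_t)1) << (STORM_END_POLL_THRESHOLD + 1)) - 1))) {
			b->storm = false;
			printf("storm subsided\n");
		}
	} else if (popcount64(b->history) >= STORM_BEGIN_THRESHOLD) {
		b->storm = true;
		printf("storm detected\n");
	}
}

int main(void)
{
	struct sim_bank b = { 0 };
	int i;

	for (i = 0; i < 5; i++)     /* five noisy polls in a row start a storm */
		track_poll(&b, true);

	for (i = 0; i < 30; i++)    /* thirty quiet polls end it again */
		track_poll(&b, false);

	return 0;
}

Five consecutive noisy polls trip STORM_BEGIN_THRESHOLD, and thirty error-free
polls clear the low 30 bits of the history and end the storm, matching the
constants added to internal.h by this patch.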
From patchwork Fri Sep 29 18:16:26 2023
X-Patchwork-Submitter: "Luck, Tony"
X-Patchwork-Id: 146809
From: Tony Luck
To: Borislav Petkov
Cc: Yazen Ghannam, Smita.KoralahalliChannabasappa@amd.com,
    dave.hansen@linux.intel.com, x86@kernel.org, linux-edac@vger.kernel.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v8 3/3] x86/mce: Handle Intel threshold interrupt storms
Date: Fri, 29 Sep 2023 11:16:26 -0700
Message-ID: <20230929181626.210782-4-tony.luck@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230929181626.210782-1-tony.luck@intel.com>
References: <20230718210813.291190-1-tony.luck@intel.com>
 <20230929181626.210782-1-tony.luck@intel.com>

Add an Intel specific hook into machine_check_poll() to keep track of
per-CPU, per-bank corrected error logs (with a stub for the
CONFIG_MCE_INTEL=n case).

When a storm is observed, the rate of interrupts is reduced by setting a
large threshold value for this bank in IA32_MCi_CTL2. This bank is added
to the bitmap of banks for this CPU to poll. The polling rate is increased
to once per second.

When a storm ends, reset the threshold in IA32_MCi_CTL2 back to 1, remove
the bank from the bitmap for polling, and change the polling rate back to
the default.

If a CPU with banks in storm mode is taken offline, the new CPU that
inherits ownership of those banks takes over management of storm(s) in the
inherited bank(s).

The cmci_discover() function was already very large. These changes pushed
it well over the top. Refactor with three helper functions to bring it
back under control.

Updated with review comments from Yazen.
Link: https://lore.kernel.org/r/6ae4df67-ba0b-4b50-8c1d-a5d382105ad2@amd.com
Signed-off-by: Tony Luck
---
 arch/x86/kernel/cpu/mce/internal.h  |   2 +
 arch/x86/kernel/cpu/mce/intel.c     | 207 +++++++++++++++++++++-------
 arch/x86/kernel/cpu/mce/threshold.c |   3 +
 3 files changed, 161 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 5e75f5f81464..90a7bc3e1cb5 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,12 +41,14 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
+void mce_intel_handle_storm(int bank, bool on);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
+static inline void mce_intel_handle_storm(int bank, bool on) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index b07656408f67..970c94240600 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -54,8 +54,27 @@ static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
  */
 static DEFINE_SPINLOCK(cmci_poll_lock);
 
+/* Linux non-storm CMCI threshold (may be overridden by BIOS) */
 #define CMCI_THRESHOLD		1
 
+/*
+ * MCi_CTL2 threshold for each bank when there is no storm.
+ * Default value for each bank may have been set by BIOS.
+ */
+static u16 cmci_threshold[MAX_NR_BANKS];
+
+/*
+ * High threshold to limit CMCI rate during storms. Max supported is
+ * 0x7FFF. Use this slightly smaller value so it has a distinctive
+ * signature when someone asks "Why am I not seeing all corrected errors?"
+ * A high threshold is used instead of just disabling CMCI for a
+ * bank because both corrected and uncorrected errors may be logged
+ * in the same bank and signalled with CMCI.
The threshold only applies + * to corrected errors, so keeping CMCI enabled means that uncorrected + * errors will still be processed in a timely fashion. + */ +#define CMCI_STORM_THRESHOLD 32749 + static int cmci_supported(int *banks) { u64 cap; @@ -110,6 +129,31 @@ static bool lmce_supported(void) return tmp & FEAT_CTL_LMCE_ENABLED; } +/* + * Set a new CMCI threshold value. Preserve the state of the + * MCI_CTL2_CMCI_EN bit in case this happens during a + * cmci_rediscover() operation. + */ +static void cmci_set_threshold(int bank, int thresh) +{ + unsigned long flags; + u64 val; + + raw_spin_lock_irqsave(&cmci_discover_lock, flags); + rdmsrl(MSR_IA32_MCx_CTL2(bank), val); + val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; + wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh); + raw_spin_unlock_irqrestore(&cmci_discover_lock, flags); +} + +void mce_intel_handle_storm(int bank, bool on) +{ + if (on) + cmci_set_threshold(bank, CMCI_STORM_THRESHOLD); + else + cmci_set_threshold(bank, cmci_threshold[bank]); +} + /* * The interrupt handler. This is called on every event. * Just call the poller directly to log any events. @@ -121,72 +165,130 @@ static void intel_threshold_interrupt(void) machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned)); } +/* + * Check all the reasons why current CPU cannot claim + * ownership of a bank. + * 1: CPU already owns this bank + * 2: BIOS owns this bank + * 3: Some other CPU owns this bank + */ +static bool cmci_skip_bank(int bank, u64 *val) +{ + unsigned long *owned = (void *)this_cpu_ptr(&mce_banks_owned); + + if (test_bit(bank, owned)) + return true; + + /* Skip banks in firmware first mode */ + if (test_bit(bank, mce_banks_ce_disabled)) + return true; + + rdmsrl(MSR_IA32_MCx_CTL2(bank), *val); + + /* Already owned by someone else? */ + if (*val & MCI_CTL2_CMCI_EN) { + clear_bit(bank, owned); + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + return true; + } + + return false; +} + +/* + * Decide which CMCI interrupt threshold to use: + * 1: If this bank is in storm mode from whichever CPU was + * the previous owner, stay in storm mode. + * 2: If ignoring any threshold set by BIOS, set Linux default + * 3: Try to honor BIOS threshold (unless buggy BIOS set it at zero). + */ +static u64 cmci_pick_threshold(u64 val, int *bios_zero_thresh) +{ + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) + return val; + + if (!mca_cfg.bios_cmci_threshold) { + val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; + val |= CMCI_THRESHOLD; + } else if (!(val & MCI_CTL2_CMCI_THRESHOLD_MASK)) { + /* + * If bios_cmci_threshold boot option was specified + * but the threshold is zero, we'll try to initialize + * it to 1. + */ + *bios_zero_thresh = 1; + val |= CMCI_THRESHOLD; + } + + return val; +} + +/* + * Try to claim ownership of a bank. + */ +static void cmci_claim_bank(int bank, u64 val, int bios_zero_thresh, int *bios_wrong_thresh) +{ + struct mca_storm_desc *storm = this_cpu_ptr(&storm_desc); + + val |= MCI_CTL2_CMCI_EN; + wrmsrl(MSR_IA32_MCx_CTL2(bank), val); + rdmsrl(MSR_IA32_MCx_CTL2(bank), val); + + /* If the enable bit did not stick, this bank should be polled. */ + if (!(val & MCI_CTL2_CMCI_EN)) { + WARN_ON(!test_bit(bank, this_cpu_ptr(mce_poll_banks))); + return; + } + + /* This CPU successfully set the enable bit. 
*/ + set_bit(bank, (void *)this_cpu_ptr(&mce_banks_owned)); + + if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) { + pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), bank); + storm->banks[bank].history = ~0ull; + storm->banks[bank].timestamp = jiffies; + cmci_storm_begin(bank); + } else { + __clear_bit(bank, this_cpu_ptr(mce_poll_banks)); + } + + /* + * We are able to set thresholds for some banks that + * had a threshold of 0. This means the BIOS has not + * set the thresholds properly or does not work with + * this boot option. Note down now and report later. + */ + if (mca_cfg.bios_cmci_threshold && bios_zero_thresh && + (val & MCI_CTL2_CMCI_THRESHOLD_MASK)) + *bios_wrong_thresh = 1; + + /* Save default threshold for each bank */ + if (cmci_threshold[bank] == 0) + cmci_threshold[bank] = val & MCI_CTL2_CMCI_THRESHOLD_MASK; +} + /* * Enable CMCI (Corrected Machine Check Interrupt) for available MCE banks * on this CPU. Use the algorithm recommended in the SDM to discover shared - * banks. + * banks. Called during initial bootstrap, and also for hotplug CPU operations + * to rediscover/reassign machine check banks. */ static void cmci_discover(int banks) { - unsigned long *owned = (void *)this_cpu_ptr(&mce_banks_owned); - unsigned long flags; - int i; int bios_wrong_thresh = 0; + unsigned long flags; + int i; raw_spin_lock_irqsave(&cmci_discover_lock, flags); for (i = 0; i < banks; i++) { u64 val; int bios_zero_thresh = 0; - if (test_bit(i, owned)) + if (cmci_skip_bank(i, &val)) continue; - /* Skip banks in firmware first mode */ - if (test_bit(i, mce_banks_ce_disabled)) - continue; - - rdmsrl(MSR_IA32_MCx_CTL2(i), val); - - /* Already owned by someone else? */ - if (val & MCI_CTL2_CMCI_EN) { - clear_bit(i, owned); - __clear_bit(i, this_cpu_ptr(mce_poll_banks)); - continue; - } - - if (!mca_cfg.bios_cmci_threshold) { - val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK; - val |= CMCI_THRESHOLD; - } else if (!(val & MCI_CTL2_CMCI_THRESHOLD_MASK)) { - /* - * If bios_cmci_threshold boot option was specified - * but the threshold is zero, we'll try to initialize - * it to 1. - */ - bios_zero_thresh = 1; - val |= CMCI_THRESHOLD; - } - - val |= MCI_CTL2_CMCI_EN; - wrmsrl(MSR_IA32_MCx_CTL2(i), val); - rdmsrl(MSR_IA32_MCx_CTL2(i), val); - - /* Did the enable bit stick? -- the bank supports CMCI */ - if (val & MCI_CTL2_CMCI_EN) { - set_bit(i, owned); - __clear_bit(i, this_cpu_ptr(mce_poll_banks)); - /* - * We are able to set thresholds for some banks that - * had a threshold of 0. This means the BIOS has not - * set the thresholds properly or does not work with - * this boot option. Note down now and report later. 
- */
-			if (mca_cfg.bios_cmci_threshold && bios_zero_thresh &&
-			    (val & MCI_CTL2_CMCI_THRESHOLD_MASK))
-				bios_wrong_thresh = 1;
-		} else {
-			WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks)));
-		}
+		val = cmci_pick_threshold(val, &bios_zero_thresh);
+		cmci_claim_bank(i, val, bios_zero_thresh, &bios_wrong_thresh);
 	}
 	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
 	if (mca_cfg.bios_cmci_threshold && bios_wrong_thresh) {
@@ -225,6 +327,9 @@ static void __cmci_disable_bank(int bank)
 	val &= ~MCI_CTL2_CMCI_EN;
 	wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
 	__clear_bit(bank, this_cpu_ptr(mce_banks_owned));
+
+	if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
+		cmci_storm_end(bank);
 }
 
 /*
diff --git a/arch/x86/kernel/cpu/mce/threshold.c b/arch/x86/kernel/cpu/mce/threshold.c
index c9e32d92cf8f..1460259ad8b3 100644
--- a/arch/x86/kernel/cpu/mce/threshold.c
+++ b/arch/x86/kernel/cpu/mce/threshold.c
@@ -35,6 +35,9 @@ DEFINE_PER_CPU(struct mca_storm_desc, storm_desc);
 static void mce_handle_storm(unsigned int bank, bool on)
 {
 	switch (boot_cpu_data.x86_vendor) {
+	case X86_VENDOR_INTEL:
+		mce_intel_handle_storm(bank, on);
+		break;
 	}
 }
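
For reference, the IA32_MCi_CTL2 manipulation that implements the storm
on/off transitions in this patch can also be sketched in user space. A plain
integer stands in for the MSR (there is no rdmsrl()/wrmsrl() here), the bit
positions are the ones defined in arch/x86/include/asm/mce.h, and only the
mask-and-set pattern follows cmci_set_threshold()/mce_intel_handle_storm();
the harness and its output are illustrative only.

/* Standalone illustration only -- not part of the patch. */
#include <stdint.h>
#include <stdio.h>

#define MCI_CTL2_CMCI_EN             (1ULL << 30)
#define MCI_CTL2_CMCI_THRESHOLD_MASK 0x7fffULL

#define CMCI_THRESHOLD        1      /* one error per interrupt when quiet */
#define CMCI_STORM_THRESHOLD  32749  /* large value used while a storm rages */

/* Replace the threshold field while leaving CMCI_EN (and other bits) alone. */
static uint64_t set_threshold(uint64_t ctl2, uint64_t thresh)
{
	ctl2 &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
	return ctl2 | thresh;
}

int main(void)
{
	/* Bank enabled for CMCI with the normal threshold of 1. */
	uint64_t ctl2 = MCI_CTL2_CMCI_EN | CMCI_THRESHOLD;

	/* Storm begins: raise the threshold so interrupts nearly stop. */
	ctl2 = set_threshold(ctl2, CMCI_STORM_THRESHOLD);
	printf("storm:  ctl2=%#llx\n", (unsigned long long)ctl2);

	/* Storm ends: drop back to the normal threshold (the patch restores
	 * the saved per-bank value from cmci_threshold[]). */
	ctl2 = set_threshold(ctl2, CMCI_THRESHOLD);
	printf("normal: ctl2=%#llx\n", (unsigned long long)ctl2);

	return 0;
}

Because the threshold only applies to corrected errors, leaving CMCI_EN set
while parking the threshold at 32749 keeps uncorrected-error signalling
intact, which is the design point the comment in intel.c explains.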