Message ID | 20230915184904.1976183-1-evan@rivosinc.com |
---|---|
State | New |
Headers |
From: Evan Green <evan@rivosinc.com> To: Palmer Dabbelt <palmer@rivosinc.com> Cc: David Laight <David.Laight@aculab.com>, Jisheng Zhang <jszhang@kernel.org>, Evan Green <evan@rivosinc.com>, Albert Ou <aou@eecs.berkeley.edu>, Andrew Jones <ajones@ventanamicro.com>, Anup Patel <apatel@ventanamicro.com>, Conor Dooley <conor.dooley@microchip.com>, Greentime Hu <greentime.hu@sifive.com>, Heiko Stuebner <heiko@sntech.de>, Ley Foon Tan <leyfoon.tan@starfivetech.com>, Marc Zyngier <maz@kernel.org>, Palmer Dabbelt <palmer@dabbelt.com>, Paul Walmsley <paul.walmsley@sifive.com>, Sunil V L <sunilvl@ventanamicro.com>, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org Subject: [PATCH] RISC-V: Probe misaligned access speed in parallel Date: Fri, 15 Sep 2023 11:49:03 -0700 Message-Id: <20230915184904.1976183-1-evan@rivosinc.com> X-Mailer: git-send-email 2.34.1 |
Series | RISC-V: Probe misaligned access speed in parallel |
Commit Message
Evan Green
Sept. 15, 2023, 6:49 p.m. UTC
Probing for misaligned access speed takes about 0.06 seconds. On a
system with 64 cores, doing this in smp_callin() means it's done
serially, extending boot time by 3.8 seconds. That's a lot of boot time.
Instead of measuring each CPU serially, let's do the measurements on
all CPUs in parallel. If we disable preemption on all CPUs, the
jiffies stop ticking, so we can do this in stages of 1) everybody
except core 0, then 2) core 0.
The measurement call in smp_callin() stays around, but is now
conditionalized to only run if a new CPU shows up after the round of
in-parallel measurements has run. The goal is to have the measurement
call not run during boot or suspend/resume, but only on a hotplug
addition.
Signed-off-by: Evan Green <evan@rivosinc.com>
---
Jisheng, I didn't add your Tested-by tag since the patch evolved from
the one you tested. Hopefully this one brings you the same result.
---
arch/riscv/include/asm/cpufeature.h | 3 ++-
arch/riscv/kernel/cpufeature.c | 28 +++++++++++++++++++++++-----
arch/riscv/kernel/smpboot.c | 11 ++++++++++-
3 files changed, 35 insertions(+), 7 deletions(-)
Comments
Yo Evan, On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote: > Probing for misaligned access speed takes about 0.06 seconds. On a > system with 64 cores, doing this in smp_callin() means it's done > serially, extending boot time by 3.8 seconds. That's a lot of boot time. > > Instead of measuring each CPU serially, let's do the measurements on > all CPUs in parallel. If we disable preemption on all CPUs, the > jiffies stop ticking, so we can do this in stages of 1) everybody > except core 0, then 2) core 0. > > The measurement call in smp_callin() stays around, but is now > conditionalized to only run if a new CPU shows up after the round of > in-parallel measurements has run. The goal is to have the measurement > call not run during boot or suspend/resume, but only on a hotplug > addition. > > Signed-off-by: Evan Green <evan@rivosinc.com> > > --- > > Jisheng, I didn't add your Tested-by tag since the patch evolved from > the one you tested. Hopefully this one brings you the same result. Ya know, I think there's scope to add Reported-by:, Closes: and Fixes: tags to this patch, mentioning explicitly that this has regressed boot time for many core systems, so that this can be fixes material. What do you think? > --- > arch/riscv/include/asm/cpufeature.h | 3 ++- > arch/riscv/kernel/cpufeature.c | 28 +++++++++++++++++++++++----- > arch/riscv/kernel/smpboot.c | 11 ++++++++++- > 3 files changed, 35 insertions(+), 7 deletions(-) > > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h > index d0345bd659c9..19e7817eba10 100644 > --- a/arch/riscv/include/asm/cpufeature.h > +++ b/arch/riscv/include/asm/cpufeature.h > @@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed); > /* Per-cpu ISA extensions. 
*/ > extern struct riscv_isainfo hart_isa[NR_CPUS]; > > -void check_unaligned_access(int cpu); > +extern bool misaligned_speed_measured; > +int check_unaligned_access(void *unused); > > #endif > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c > index 1cfbba65d11a..8eb36e1dfb95 100644 > --- a/arch/riscv/kernel/cpufeature.c > +++ b/arch/riscv/kernel/cpufeature.c > @@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS]; > /* Performance information */ > DEFINE_PER_CPU(long, misaligned_access_speed); > > +/* Boot-time in-parallel unaligned access measurement has occurred. */ > +bool misaligned_speed_measured; If you did something like s/measured/complete/ I think you could drop the comment. Tis whatever though :) Conor.
On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote: > Probing for misaligned access speed takes about 0.06 seconds. On a > system with 64 cores, doing this in smp_callin() means it's done > serially, extending boot time by 3.8 seconds. That's a lot of boot time. > > Instead of measuring each CPU serially, let's do the measurements on > all CPUs in parallel. If we disable preemption on all CPUs, the > jiffies stop ticking, so we can do this in stages of 1) everybody > except core 0, then 2) core 0. > > The measurement call in smp_callin() stays around, but is now > conditionalized to only run if a new CPU shows up after the round of > in-parallel measurements has run. The goal is to have the measurement > call not run during boot or suspend/resume, but only on a hotplug > addition. Yay! I had just recently tested suspend/resume and wanted to report the probe as an issue, but I hadn't gotten around to it. This patch resolves the issue, so Test-by: Andrew Jones <ajones@ventanamicro.com> > > Signed-off-by: Evan Green <evan@rivosinc.com> > > --- > > Jisheng, I didn't add your Tested-by tag since the patch evolved from > the one you tested. Hopefully this one brings you the same result. > > --- > arch/riscv/include/asm/cpufeature.h | 3 ++- > arch/riscv/kernel/cpufeature.c | 28 +++++++++++++++++++++++----- > arch/riscv/kernel/smpboot.c | 11 ++++++++++- > 3 files changed, 35 insertions(+), 7 deletions(-) > > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h > index d0345bd659c9..19e7817eba10 100644 > --- a/arch/riscv/include/asm/cpufeature.h > +++ b/arch/riscv/include/asm/cpufeature.h > @@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed); > /* Per-cpu ISA extensions. */ > extern struct riscv_isainfo hart_isa[NR_CPUS]; > > -void check_unaligned_access(int cpu); > +extern bool misaligned_speed_measured; Do we need this new state or could we just always check the boot cpu's state to get the same information? 
per_cpu(misaligned_access_speed, 0) != RISCV_HWPROBE_MISALIGNED_UNKNOWN > +int check_unaligned_access(void *unused); > > #endif > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c > index 1cfbba65d11a..8eb36e1dfb95 100644 > --- a/arch/riscv/kernel/cpufeature.c > +++ b/arch/riscv/kernel/cpufeature.c > @@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS]; > /* Performance information */ > DEFINE_PER_CPU(long, misaligned_access_speed); > > +/* Boot-time in-parallel unaligned access measurement has occurred. */ > +bool misaligned_speed_measured; > + > /** > * riscv_isa_extension_base() - Get base extension word > * > @@ -556,8 +559,9 @@ unsigned long riscv_get_elf_hwcap(void) > return hwcap; > } > > -void check_unaligned_access(int cpu) > +int check_unaligned_access(void *unused) > { > + int cpu = smp_processor_id(); > u64 start_cycles, end_cycles; > u64 word_cycles; > u64 byte_cycles; > @@ -571,7 +575,7 @@ void check_unaligned_access(int cpu) > page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE)); > if (!page) { > pr_warn("Can't alloc pages to measure memcpy performance"); > - return; > + return 0; > } > > /* Make an unaligned destination buffer. */ > @@ -643,15 +647,29 @@ void check_unaligned_access(int cpu) > > out: > __free_pages(page, get_order(MISALIGNED_BUFFER_SIZE)); > + return 0; > +} > + > +static void check_unaligned_access_nonboot_cpu(void *param) > +{ > + if (smp_processor_id() != 0) > + check_unaligned_access(param); > } > > -static int check_unaligned_access_boot_cpu(void) > +static int check_unaligned_access_all_cpus(void) > { > - check_unaligned_access(0); > + /* Check everybody except 0, who stays behind to tend jiffies. */ > + on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1); > + > + /* Check core 0. */ > + smp_call_on_cpu(0, check_unaligned_access, NULL, true); > + > + /* Boot-time measurements are complete. 
*/ > + misaligned_speed_measured = true; > return 0; > } > > -arch_initcall(check_unaligned_access_boot_cpu); > +arch_initcall(check_unaligned_access_all_cpus); > > #ifdef CONFIG_RISCV_ALTERNATIVE > /* > diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c > index 1b8da4e40a4d..39322ae20a75 100644 > --- a/arch/riscv/kernel/smpboot.c > +++ b/arch/riscv/kernel/smpboot.c > @@ -27,6 +27,7 @@ > #include <linux/sched/mm.h> > #include <asm/cpu_ops.h> > #include <asm/cpufeature.h> > +#include <asm/hwprobe.h> > #include <asm/irq.h> > #include <asm/mmu_context.h> > #include <asm/numa.h> > @@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void) > > numa_add_cpu(curr_cpuid); > set_cpu_online(curr_cpuid, 1); > - check_unaligned_access(curr_cpuid); > + > + /* > + * Boot-time misaligned access speed measurements are done in parallel > + * in an initcall. Only measure here for hotplug. > + */ > + if (misaligned_speed_measured && > + (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) { > + check_unaligned_access(NULL); > + } > > if (has_vector()) { > if (riscv_v_setup_vsize()) > -- > 2.34.1 > Besides my reluctance to add another global variable, this looks good to me. Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Thanks, drew
On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote: > Probing for misaligned access speed takes about 0.06 seconds. On a > system with 64 cores, doing this in smp_callin() means it's done > serially, extending boot time by 3.8 seconds. That's a lot of boot time. > > Instead of measuring each CPU serially, let's do the measurements on > all CPUs in parallel. If we disable preemption on all CPUs, the > jiffies stop ticking, so we can do this in stages of 1) everybody > except core 0, then 2) core 0. > > The measurement call in smp_callin() stays around, but is now > conditionalized to only run if a new CPU shows up after the round of > in-parallel measurements has run. The goal is to have the measurement > call not run during boot or suspend/resume, but only on a hotplug > addition. > > Signed-off-by: Evan Green <evan@rivosinc.com> Reported-by: Jisheng Zhang <jszhang@kernel.org> > > --- > > Jisheng, I didn't add your Tested-by tag since the patch evolved from > the one you tested. Hopefully this one brings you the same result. > > --- > arch/riscv/include/asm/cpufeature.h | 3 ++- > arch/riscv/kernel/cpufeature.c | 28 +++++++++++++++++++++++----- > arch/riscv/kernel/smpboot.c | 11 ++++++++++- > 3 files changed, 35 insertions(+), 7 deletions(-) > > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h > index d0345bd659c9..19e7817eba10 100644 > --- a/arch/riscv/include/asm/cpufeature.h > +++ b/arch/riscv/include/asm/cpufeature.h > @@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed); > /* Per-cpu ISA extensions. 
*/ > extern struct riscv_isainfo hart_isa[NR_CPUS]; > > -void check_unaligned_access(int cpu); > +extern bool misaligned_speed_measured; > +int check_unaligned_access(void *unused); > > #endif > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c > index 1cfbba65d11a..8eb36e1dfb95 100644 > --- a/arch/riscv/kernel/cpufeature.c > +++ b/arch/riscv/kernel/cpufeature.c > @@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS]; > /* Performance information */ > DEFINE_PER_CPU(long, misaligned_access_speed); > > +/* Boot-time in-parallel unaligned access measurement has occurred. */ > +bool misaligned_speed_measured; This var can be avoided, see below. > + > /** > * riscv_isa_extension_base() - Get base extension word > * > @@ -556,8 +559,9 @@ unsigned long riscv_get_elf_hwcap(void) > return hwcap; > } > > -void check_unaligned_access(int cpu) > +int check_unaligned_access(void *unused) > { > + int cpu = smp_processor_id(); > u64 start_cycles, end_cycles; > u64 word_cycles; > u64 byte_cycles; > @@ -571,7 +575,7 @@ void check_unaligned_access(int cpu) > page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE)); > if (!page) { > pr_warn("Can't alloc pages to measure memcpy performance"); > - return; > + return 0; > } > > /* Make an unaligned destination buffer. */ > @@ -643,15 +647,29 @@ void check_unaligned_access(int cpu) > > out: > __free_pages(page, get_order(MISALIGNED_BUFFER_SIZE)); > + return 0; > +} > + > +static void check_unaligned_access_nonboot_cpu(void *param) > +{ > + if (smp_processor_id() != 0) > + check_unaligned_access(param); > } > > -static int check_unaligned_access_boot_cpu(void) > +static int check_unaligned_access_all_cpus(void) > { > - check_unaligned_access(0); > + /* Check everybody except 0, who stays behind to tend jiffies. */ > + on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1); > + > + /* Check core 0. 
*/ > + smp_call_on_cpu(0, check_unaligned_access, NULL, true); > + > + /* Boot-time measurements are complete. */ > + misaligned_speed_measured = true; > return 0; > } > > -arch_initcall(check_unaligned_access_boot_cpu); > +arch_initcall(check_unaligned_access_all_cpus); > > #ifdef CONFIG_RISCV_ALTERNATIVE > /* > diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c > index 1b8da4e40a4d..39322ae20a75 100644 > --- a/arch/riscv/kernel/smpboot.c > +++ b/arch/riscv/kernel/smpboot.c > @@ -27,6 +27,7 @@ > #include <linux/sched/mm.h> > #include <asm/cpu_ops.h> > #include <asm/cpufeature.h> > +#include <asm/hwprobe.h> > #include <asm/irq.h> > #include <asm/mmu_context.h> > #include <asm/numa.h> > @@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void) > > numa_add_cpu(curr_cpuid); > set_cpu_online(curr_cpuid, 1); > - check_unaligned_access(curr_cpuid); > + > + /* > + * Boot-time misaligned access speed measurements are done in parallel > + * in an initcall. Only measure here for hotplug. > + */ > + if (misaligned_speed_measured && > + (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) { I believe this check is for cpu not-booted during boot time but hotplug in after that, if so I'm not sure whether misaligned_speed_measured can be replaced with (system_state == SYSTEM_RUNNING) then we don't need misaligned_speed_measured at all. > + check_unaligned_access(NULL); > + } > > if (has_vector()) { > if (riscv_v_setup_vsize()) > -- > 2.34.1 >
On Sat, Sep 16, 2023 at 04:39:54PM +0800, Jisheng Zhang wrote:
> On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote:
> > Probing for misaligned access speed takes about 0.06 seconds. On a
> > system with 64 cores, doing this in smp_callin() means it's done
> > serially, extending boot time by 3.8 seconds. That's a lot of boot time.
> >
> > Instead of measuring each CPU serially, let's do the measurements on
> > all CPUs in parallel. If we disable preemption on all CPUs, the
> > jiffies stop ticking, so we can do this in stages of 1) everybody
> > except core 0, then 2) core 0.
> >
> > The measurement call in smp_callin() stays around, but is now
> > conditionalized to only run if a new CPU shows up after the round of
> > in-parallel measurements has run. The goal is to have the measurement
> > call not run during boot or suspend/resume, but only on a hotplug
> > addition.
> >
> > Signed-off-by: Evan Green <evan@rivosinc.com>
>
> Reported-by: Jisheng Zhang <jszhang@kernel.org>

Hi Evan, Palmer,

This patch seems missing in v6.6, I dunno what happened.

And this patch doesn't fix the boot time regression but also fix a real bug during cpu hotplug on and off.

Here is the reproduce script:

while true
do
	echo 0 > /sys/devices/system/cpu/cpu1/online
	echo 1 > /sys/devices/system/cpu/cpu1/online
done

Here is the BUG log on qemu:

[ 20.950753] CPU1: failed to come online
[ 20.951875] ------------[ cut here ]------------
[ 20.952070] kernel BUG at kernel/time/hrtimer.c:2227!
[ 20.952341] Kernel BUG [#1]
[ 20.952366] Modules linked in:
[ 20.952515] CPU: 0 PID: 46 Comm: sh Not tainted 6.6.0 #3
[ 20.952607] Hardware name: riscv-virtio,qemu (DT)
[ 20.952695] epc : hrtimers_dead_cpu+0x22e/0x230
[ 20.952808] ra : cpuhp_invoke_callback+0xe4/0x54e
[ 20.952844] epc : ffffffff8007d6c0 ra : ffffffff8000f904 sp : ff600000011ebb30
[ 20.952863] gp : ffffffff80d081d0 tp : ff6000000134da00 t0 : 0000000000000040
[ 20.952880] t1 : 0000000000000000 t2 : 0000000000000000 s0 : ff600000011ebbb0
[ 20.952895] s1 : 0000000000000001 a0 : 0000000000000001 a1 : 000000000000002c
[ 20.952911] a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000
[ 20.952926] a5 : 0000000000000001 a6 : 0000000000000538 a7 : 0000000000000000
[ 20.952941] s2 : 000000000000002c s3 : 0000000000000000 s4 : ff6000003ffd4390
[ 20.952957] s5 : ffffffff80d0a1f8 s6 : 0000000000000000 s7 : ffffffff8007d492
[ 20.952972] s8 : 0000000000000001 s9 : fffffffffffffffb s10: 0000000000000000
[ 20.952987] s11: 00005555820dc708 t3 : 0000000000000002 t4 : 0000000000000402
[ 20.953002] t5 : ff600000010f0710 t6 : ff600000010f0718
[ 20.953016] status: 0000000200000120 badaddr: 0000000000000000 cause: 0000000000000003
[ 20.953124] [<ffffffff8007d6c0>] hrtimers_dead_cpu+0x22e/0x230
[ 20.953226] [<ffffffff8000f904>] cpuhp_invoke_callback+0xe4/0x54e
[ 20.953241] [<ffffffff80010fb8>] _cpu_up+0x200/0x2a2
[ 20.953254] [<ffffffff800110ac>] cpu_up+0x52/0x8a
[ 20.953266] [<ffffffff80011654>] cpu_device_up+0x14/0x1c
[ 20.953279] [<ffffffff8029abb6>] cpu_subsys_online+0x1e/0x68
[ 20.953296] [<ffffffff802957de>] device_online+0x3c/0x70
[ 20.953306] [<ffffffff8029587a>] online_store+0x68/0x8c
[ 20.953317] [<ffffffff802909ba>] dev_attr_store+0xe/0x1a
[ 20.953330] [<ffffffff801df8aa>] sysfs_kf_write+0x2a/0x34
[ 20.953346] [<ffffffff801def06>] kernfs_fop_write_iter+0xde/0x162
[ 20.953360] [<ffffffff8018154a>] vfs_write+0x136/0x320
[ 20.953372] [<ffffffff801818e4>] ksys_write+0x4a/0xb4
[ 20.953383]
[<ffffffff80181962>] __riscv_sys_write+0x14/0x1c [ 20.953394] [<ffffffff803dec7e>] do_trap_ecall_u+0x4a/0x110 [ 20.953420] [<ffffffff80003666>] ret_from_exception+0x0/0x66 [ 20.953648] Code: 7c42 7ca2 7d02 6de2 4501 6109 8082 c0ef 7463 bd1d (9002) 1141 [ 20.953897] ---[ end trace 0000000000000000 ]--- [ 20.954068] Kernel panic - not syncing: Fatal exception in interrupt [ 20.954128] SMP: stopping secondary CPUs [ 22.749953] SMP: failed to stop secondary CPUs 0-1 [ 22.803768] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]--- > > > > --- > > > > Jisheng, I didn't add your Tested-by tag since the patch evolved from > > the one you tested. Hopefully this one brings you the same result. > > > > --- > > arch/riscv/include/asm/cpufeature.h | 3 ++- > > arch/riscv/kernel/cpufeature.c | 28 +++++++++++++++++++++++----- > > arch/riscv/kernel/smpboot.c | 11 ++++++++++- > > 3 files changed, 35 insertions(+), 7 deletions(-) > > > > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h > > index d0345bd659c9..19e7817eba10 100644 > > --- a/arch/riscv/include/asm/cpufeature.h > > +++ b/arch/riscv/include/asm/cpufeature.h > > @@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed); > > /* Per-cpu ISA extensions. */ > > extern struct riscv_isainfo hart_isa[NR_CPUS]; > > > > -void check_unaligned_access(int cpu); > > +extern bool misaligned_speed_measured; > > +int check_unaligned_access(void *unused); > > > > #endif > > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c > > index 1cfbba65d11a..8eb36e1dfb95 100644 > > --- a/arch/riscv/kernel/cpufeature.c > > +++ b/arch/riscv/kernel/cpufeature.c > > @@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS]; > > /* Performance information */ > > DEFINE_PER_CPU(long, misaligned_access_speed); > > > > +/* Boot-time in-parallel unaligned access measurement has occurred. */ > > +bool misaligned_speed_measured; > > This var can be avoided, see below. 
> > > + > > /** > > * riscv_isa_extension_base() - Get base extension word > > * > > @@ -556,8 +559,9 @@ unsigned long riscv_get_elf_hwcap(void) > > return hwcap; > > } > > > > -void check_unaligned_access(int cpu) > > +int check_unaligned_access(void *unused) > > { > > + int cpu = smp_processor_id(); > > u64 start_cycles, end_cycles; > > u64 word_cycles; > > u64 byte_cycles; > > @@ -571,7 +575,7 @@ void check_unaligned_access(int cpu) > > page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE)); > > if (!page) { > > pr_warn("Can't alloc pages to measure memcpy performance"); > > - return; > > + return 0; > > } > > > > /* Make an unaligned destination buffer. */ > > @@ -643,15 +647,29 @@ void check_unaligned_access(int cpu) > > > > out: > > __free_pages(page, get_order(MISALIGNED_BUFFER_SIZE)); > > + return 0; > > +} > > + > > +static void check_unaligned_access_nonboot_cpu(void *param) > > +{ > > + if (smp_processor_id() != 0) > > + check_unaligned_access(param); > > } > > > > -static int check_unaligned_access_boot_cpu(void) > > +static int check_unaligned_access_all_cpus(void) > > { > > - check_unaligned_access(0); > > + /* Check everybody except 0, who stays behind to tend jiffies. */ > > + on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1); > > + > > + /* Check core 0. */ > > + smp_call_on_cpu(0, check_unaligned_access, NULL, true); > > + > > + /* Boot-time measurements are complete. 
*/ > > + misaligned_speed_measured = true; > > return 0; > > } > > > > -arch_initcall(check_unaligned_access_boot_cpu); > > +arch_initcall(check_unaligned_access_all_cpus); > > > > #ifdef CONFIG_RISCV_ALTERNATIVE > > /* > > diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c > > index 1b8da4e40a4d..39322ae20a75 100644 > > --- a/arch/riscv/kernel/smpboot.c > > +++ b/arch/riscv/kernel/smpboot.c > > @@ -27,6 +27,7 @@ > > #include <linux/sched/mm.h> > > #include <asm/cpu_ops.h> > > #include <asm/cpufeature.h> > > +#include <asm/hwprobe.h> > > #include <asm/irq.h> > > #include <asm/mmu_context.h> > > #include <asm/numa.h> > > @@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void) > > > > numa_add_cpu(curr_cpuid); > > set_cpu_online(curr_cpuid, 1); > > - check_unaligned_access(curr_cpuid); > > + > > + /* > > + * Boot-time misaligned access speed measurements are done in parallel > > + * in an initcall. Only measure here for hotplug. > > + */ > > + if (misaligned_speed_measured && > > + (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) { > > I believe this check is for cpu not-booted during boot time but hotplug in > after that, if so I'm not sure whether > misaligned_speed_measured can be replaced with > (system_state == SYSTEM_RUNNING) > then we don't need misaligned_speed_measured at all. > > > + check_unaligned_access(NULL); > > + } > > > > if (has_vector()) { > > if (riscv_v_setup_vsize()) > > -- > > 2.34.1 > >
On Wed, Nov 1, 2023 at 4:44 AM Jisheng Zhang <jszhang@kernel.org> wrote: > > On Sat, Sep 16, 2023 at 04:39:54PM +0800, Jisheng Zhang wrote: > > On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote: > > > Probing for misaligned access speed takes about 0.06 seconds. On a > > > system with 64 cores, doing this in smp_callin() means it's done > > > serially, extending boot time by 3.8 seconds. That's a lot of boot time. > > > > > > Instead of measuring each CPU serially, let's do the measurements on > > > all CPUs in parallel. If we disable preemption on all CPUs, the > > > jiffies stop ticking, so we can do this in stages of 1) everybody > > > except core 0, then 2) core 0. > > > > > > The measurement call in smp_callin() stays around, but is now > > > conditionalized to only run if a new CPU shows up after the round of > > > in-parallel measurements has run. The goal is to have the measurement > > > call not run during boot or suspend/resume, but only on a hotplug > > > addition. > > > > > > Signed-off-by: Evan Green <evan@rivosinc.com> > > > > Reported-by: Jisheng Zhang <jszhang@kernel.org> > > Hi Evan, Palmer, > > This patch seems missing in v6.6, I dunno what happened. > > And this patch doesn't fix the boot time regression but also fix a real > bug during cpu hotplug on and off. Hi Jisheng, Just to clarify, you're saying this both fixes the boot regression, and fixes a hotplug crash? I was slightly thrown off by the "doesn't fix the boot time regression", holler if there's still something wrong with boot time. The splat you pasted suggests the CPU isn't coming back online. Off the top of my head I can't think of what that might be or why this patch would fix it. 
On Wed, Nov 01, 2023 at 10:28:53AM -0700, Evan Green wrote:
> On Wed, Nov 1, 2023 at 4:44 AM Jisheng Zhang <jszhang@kernel.org> wrote:
> >
> > On Sat, Sep 16, 2023 at 04:39:54PM +0800, Jisheng Zhang wrote:
> > > On Fri, Sep 15, 2023 at 11:49:03AM -0700, Evan Green wrote:
> > > > Probing for misaligned access speed takes about 0.06 seconds. On a
> > > > system with 64 cores, doing this in smp_callin() means it's done
> > > > serially, extending boot time by 3.8 seconds. That's a lot of boot time.
> > > >
> > > > Instead of measuring each CPU serially, let's do the measurements on
> > > > all CPUs in parallel. If we disable preemption on all CPUs, the
> > > > jiffies stop ticking, so we can do this in stages of 1) everybody
> > > > except core 0, then 2) core 0.
> > > >
> > > > The measurement call in smp_callin() stays around, but is now
> > > > conditionalized to only run if a new CPU shows up after the round of
> > > > in-parallel measurements has run. The goal is to have the measurement
> > > > call not run during boot or suspend/resume, but only on a hotplug
> > > > addition.
> > > >
> > > > Signed-off-by: Evan Green <evan@rivosinc.com>
> > >
> > > Reported-by: Jisheng Zhang <jszhang@kernel.org>
> >
> > Hi Evan, Palmer,
> >
> > This patch seems missing in v6.6, I dunno what happened.
> >
> > And this patch doesn't fix the boot time regression but also fix a real
> > bug during cpu hotplug on and off.
>
> Hi Jisheng,
> Just to clarify, you're saying this both fixes the boot regression,
> and fixes a hotplug crash? I was slightly thrown off by the "doesn't
> fix the boot time regression", holler if there's still something wrong

typo: should be "not only fix the boot time regression but also ..."

> with boot time.
>
> The splat you pasted suggests the CPU isn't coming back online. Off
> the top of my head I can't think of what that might be or why this
> patch would fix it. I tried this on an old palmer/for-next and didn't
> repro the issue:
>
> # echo 0 > online
> [   31.777280] CPU3: off
> [   31.777740] CPU3 may not have stopped: 3
> # echo 1 > online
> [   36.236313] cpu3: Ratio of byte access time to unaligned word
> access is 7.26, unaligned accesses are fast

you need to run the script for some time, 3 ~ 5 minutes for example.
Only hotplug cpu off then on for once isn't enough

>
> FWIW, Palmer's for-next branch now has the v2 of this patch. I

I want v2 patch be merged

> verified that branch is booting, and hotplug seems to work as well.

can you try stress cpu hotplug without your patch? I.E try on v6.6

Thanks

> >
> > Here is the reproduce script:
> >
> > while true
> > do
> > echo 0 > /sys/devices/system/cpu/cpu1/online
> > echo 1 > /sys/devices/system/cpu/cpu1/online
> > done
> >
> >
> > Here is the BUG log on qemu:
> >
> > [   20.950753] CPU1: failed to come online
> > [   20.951875] ------------[ cut here ]------------
> > [   20.952070] kernel BUG at kernel/time/hrtimer.c:2227!
> > [   20.952341] Kernel BUG [#1]
> > [   20.952366] Modules linked in:
> > [   20.952515] CPU: 0 PID: 46 Comm: sh Not tainted 6.6.0 #3
> > [   20.952607] Hardware name: riscv-virtio,qemu (DT)
> > [   20.952695] epc : hrtimers_dead_cpu+0x22e/0x230
> > [   20.952808]  ra : cpuhp_invoke_callback+0xe4/0x54e
> > [   20.952844] epc : ffffffff8007d6c0 ra : ffffffff8000f904 sp : ff600000011ebb30
> > [   20.952863]  gp : ffffffff80d081d0 tp : ff6000000134da00 t0 : 0000000000000040
> > [   20.952880]  t1 : 0000000000000000 t2 : 0000000000000000 s0 : ff600000011ebbb0
> > [   20.952895]  s1 : 0000000000000001 a0 : 0000000000000001 a1 : 000000000000002c
> > [   20.952911]  a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000
> > [   20.952926]  a5 : 0000000000000001 a6 : 0000000000000538 a7 : 0000000000000000
> > [   20.952941]  s2 : 000000000000002c s3 : 0000000000000000 s4 : ff6000003ffd4390
> > [   20.952957]  s5 : ffffffff80d0a1f8 s6 : 0000000000000000 s7 : ffffffff8007d492
> > [   20.952972]  s8 : 0000000000000001 s9 : fffffffffffffffb s10: 0000000000000000
> > [   20.952987]  s11: 00005555820dc708 t3 : 0000000000000002 t4 : 0000000000000402
> > [   20.953002]  t5 : ff600000010f0710 t6 : ff600000010f0718
> > [   20.953016] status: 0000000200000120 badaddr: 0000000000000000 cause: 0000000000000003
> > [   20.953124] [<ffffffff8007d6c0>] hrtimers_dead_cpu+0x22e/0x230
> > [   20.953226] [<ffffffff8000f904>] cpuhp_invoke_callback+0xe4/0x54e
> > [   20.953241] [<ffffffff80010fb8>] _cpu_up+0x200/0x2a2
> > [   20.953254] [<ffffffff800110ac>] cpu_up+0x52/0x8a
> > [   20.953266] [<ffffffff80011654>] cpu_device_up+0x14/0x1c
> > [   20.953279] [<ffffffff8029abb6>] cpu_subsys_online+0x1e/0x68
> > [   20.953296] [<ffffffff802957de>] device_online+0x3c/0x70
> > [   20.953306] [<ffffffff8029587a>] online_store+0x68/0x8c
> > [   20.953317] [<ffffffff802909ba>] dev_attr_store+0xe/0x1a
> > [   20.953330] [<ffffffff801df8aa>] sysfs_kf_write+0x2a/0x34
> > [   20.953346] [<ffffffff801def06>] kernfs_fop_write_iter+0xde/0x162
> > [   20.953360] [<ffffffff8018154a>] vfs_write+0x136/0x320
> > [   20.953372] [<ffffffff801818e4>] ksys_write+0x4a/0xb4
> > [   20.953383] [<ffffffff80181962>] __riscv_sys_write+0x14/0x1c
> > [   20.953394] [<ffffffff803dec7e>] do_trap_ecall_u+0x4a/0x110
> > [   20.953420] [<ffffffff80003666>] ret_from_exception+0x0/0x66
> > [   20.953648] Code: 7c42 7ca2 7d02 6de2 4501 6109 8082 c0ef 7463 bd1d (9002) 1141
> > [   20.953897] ---[ end trace 0000000000000000 ]---
> > [   20.954068] Kernel panic - not syncing: Fatal exception in interrupt
> > [   20.954128] SMP: stopping secondary CPUs
> > [   22.749953] SMP: failed to stop secondary CPUs 0-1
> > [   22.803768] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
> >
> > > >
> > > > ---
> > > >
> > > > Jisheng, I didn't add your Tested-by tag since the patch evolved from
> > > > the one you tested. Hopefully this one brings you the same result.
> > > >
> > > > ---
> > > >  arch/riscv/include/asm/cpufeature.h |  3 ++-
> > > >  arch/riscv/kernel/cpufeature.c      | 28 +++++++++++++++++++++++-----
> > > >  arch/riscv/kernel/smpboot.c         | 11 ++++++++++-
> > > >  3 files changed, 35 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
> > > > index d0345bd659c9..19e7817eba10 100644
> > > > --- a/arch/riscv/include/asm/cpufeature.h
> > > > +++ b/arch/riscv/include/asm/cpufeature.h
> > > > @@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed);
> > > >  /* Per-cpu ISA extensions. */
> > > >  extern struct riscv_isainfo hart_isa[NR_CPUS];
> > > >
> > > > -void check_unaligned_access(int cpu);
> > > > +extern bool misaligned_speed_measured;
> > > > +int check_unaligned_access(void *unused);
> > > >
> > > >  #endif
> > > > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> > > > index 1cfbba65d11a..8eb36e1dfb95 100644
> > > > --- a/arch/riscv/kernel/cpufeature.c
> > > > +++ b/arch/riscv/kernel/cpufeature.c
> > > > @@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS];
> > > >  /* Performance information */
> > > >  DEFINE_PER_CPU(long, misaligned_access_speed);
> > > >
> > > > +/* Boot-time in-parallel unaligned access measurement has occurred. */
> > > > +bool misaligned_speed_measured;
> > >
> > > This var can be avoided, see below.
> > >
> > > > +
> > > >  /**
> > > >   * riscv_isa_extension_base() - Get base extension word
> > > >   *
> > > > @@ -556,8 +559,9 @@ unsigned long riscv_get_elf_hwcap(void)
> > > >  	return hwcap;
> > > >  }
> > > >
> > > > -void check_unaligned_access(int cpu)
> > > > +int check_unaligned_access(void *unused)
> > > >  {
> > > > +	int cpu = smp_processor_id();
> > > >  	u64 start_cycles, end_cycles;
> > > >  	u64 word_cycles;
> > > >  	u64 byte_cycles;
> > > > @@ -571,7 +575,7 @@ void check_unaligned_access(int cpu)
> > > >  	page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
> > > >  	if (!page) {
> > > >  		pr_warn("Can't alloc pages to measure memcpy performance");
> > > > -		return;
> > > > +		return 0;
> > > >  	}
> > > >
> > > >  	/* Make an unaligned destination buffer. */
> > > > @@ -643,15 +647,29 @@ void check_unaligned_access(int cpu)
> > > >
> > > >  out:
> > > >  	__free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static void check_unaligned_access_nonboot_cpu(void *param)
> > > > +{
> > > > +	if (smp_processor_id() != 0)
> > > > +		check_unaligned_access(param);
> > > >  }
> > > >
> > > > -static int check_unaligned_access_boot_cpu(void)
> > > > +static int check_unaligned_access_all_cpus(void)
> > > >  {
> > > > -	check_unaligned_access(0);
> > > > +	/* Check everybody except 0, who stays behind to tend jiffies. */
> > > > +	on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1);
> > > > +
> > > > +	/* Check core 0. */
> > > > +	smp_call_on_cpu(0, check_unaligned_access, NULL, true);
> > > > +
> > > > +	/* Boot-time measurements are complete. */
> > > > +	misaligned_speed_measured = true;
> > > >  	return 0;
> > > >  }
> > > >
> > > > -arch_initcall(check_unaligned_access_boot_cpu);
> > > > +arch_initcall(check_unaligned_access_all_cpus);
> > > >
> > > >  #ifdef CONFIG_RISCV_ALTERNATIVE
> > > >  /*
> > > > diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
> > > > index 1b8da4e40a4d..39322ae20a75 100644
> > > > --- a/arch/riscv/kernel/smpboot.c
> > > > +++ b/arch/riscv/kernel/smpboot.c
> > > > @@ -27,6 +27,7 @@
> > > >  #include <linux/sched/mm.h>
> > > >  #include <asm/cpu_ops.h>
> > > >  #include <asm/cpufeature.h>
> > > > +#include <asm/hwprobe.h>
> > > >  #include <asm/irq.h>
> > > >  #include <asm/mmu_context.h>
> > > >  #include <asm/numa.h>
> > > > @@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void)
> > > >
> > > >  	numa_add_cpu(curr_cpuid);
> > > >  	set_cpu_online(curr_cpuid, 1);
> > > > -	check_unaligned_access(curr_cpuid);
> > > > +
> > > > +	/*
> > > > +	 * Boot-time misaligned access speed measurements are done in parallel
> > > > +	 * in an initcall. Only measure here for hotplug.
> > > > +	 */
> > > > +	if (misaligned_speed_measured &&
> > > > +	    (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) {
> > >
> > > I believe this check is for cpu not-booted during boot time but hotplug in
> > > after that, if so I'm not sure whether
> > > misaligned_speed_measured can be replaced with
> > > (system_state == SYSTEM_RUNNING)
> > > then we don't need misaligned_speed_measured at all.
> > >
> > > > +		check_unaligned_access(NULL);
> > > > +	}
> > > >
> > > >  	if (has_vector()) {
> > > >  		if (riscv_v_setup_vsize())
> > > > --
> > > > 2.34.1
> > > >
On Fri, Sep 15, 2023 at 11:49 AM Evan Green <evan@rivosinc.com> wrote:
>
> Probing for misaligned access speed takes about 0.06 seconds. On a
> system with 64 cores, doing this in smp_callin() means it's done
> serially, extending boot time by 3.8 seconds. That's a lot of boot time.
>
> Instead of measuring each CPU serially, let's do the measurements on
> all CPUs in parallel. If we disable preemption on all CPUs, the
> jiffies stop ticking, so we can do this in stages of 1) everybody
> except core 0, then 2) core 0.
>
> The measurement call in smp_callin() stays around, but is now
> conditionalized to only run if a new CPU shows up after the round of
> in-parallel measurements has run. The goal is to have the measurement
> call not run during boot or suspend/resume, but only on a hotplug
> addition.
>
> Signed-off-by: Evan Green <evan@rivosinc.com>

Shoot, I saw the other thread [1] where it seems like my use of
alloc_pages() in this context is improper? I had thought I was
alright, as Documentation/core-api/memory-allocation.rst says:

> If the allocation is performed from an atomic context, e.g interrupt
> handler, use ``GFP_NOWAIT``.

Any tips for reproducing that splat? I have CONFIG_DEBUG_ATOMIC_SLEEP
on (it's in the defconfig), and lockdep, and I'm on Conor's
linux-6.6.y-rt, but so far I'm not seeing it.

-Evan

[1] https://lore.kernel.org/linux-riscv/ZUPWc7sY47l34lV+@xhacker/T/#t
On Thu, Nov 02, 2023 at 03:41:58PM -0700, Evan Green wrote:
> On Fri, Sep 15, 2023 at 11:49 AM Evan Green <evan@rivosinc.com> wrote:
> >
> > Probing for misaligned access speed takes about 0.06 seconds. On a
> > system with 64 cores, doing this in smp_callin() means it's done
> > serially, extending boot time by 3.8 seconds. That's a lot of boot time.
> >
> > Instead of measuring each CPU serially, let's do the measurements on
> > all CPUs in parallel. If we disable preemption on all CPUs, the
> > jiffies stop ticking, so we can do this in stages of 1) everybody
> > except core 0, then 2) core 0.
> >
> > The measurement call in smp_callin() stays around, but is now
> > conditionalized to only run if a new CPU shows up after the round of
> > in-parallel measurements has run. The goal is to have the measurement
> > call not run during boot or suspend/resume, but only on a hotplug
> > addition.
> >
> > Signed-off-by: Evan Green <evan@rivosinc.com>
>
> Shoot, I saw the other thread [1] where it seems like my use of
> alloc_pages() in this context is improper? I had thought I was
> alright, as Documentation/core-api/memory-allocation.rst says:
>
> > If the allocation is performed from an atomic context, e.g interrupt
> > handler, use ``GFP_NOWAIT``.
>
> Any tips for reproducing that splat? I have CONFIG_DEBUG_ATOMIC_SLEEP
> on (it's in the defconfig), and lockdep, and I'm on Conor's
> linux-6.6.y-rt, but so far I'm not seeing it.

It was originally produced in hardware, but I can also see these issues
in QEMU's emulation of my hardware (although as you may have seen, I get
them both with and without this patch). My qemu incantation was
something like:

$(qemu) -M microchip-icicle-kit \
	-m 3G -smp 5 \
	-kernel vmlinux.bin \
	-dtb mpfs-icicle.dtb \
	-initrd initramfs \
	-display none -serial null \
	-serial stdio \
	-D qemu.log -d unimp

Where the kernel was built from the .config in that branch in my repo.

Cheers,
Conor.
diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index d0345bd659c9..19e7817eba10 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -30,6 +30,7 @@ DECLARE_PER_CPU(long, misaligned_access_speed);
 /* Per-cpu ISA extensions. */
 extern struct riscv_isainfo hart_isa[NR_CPUS];
 
-void check_unaligned_access(int cpu);
+extern bool misaligned_speed_measured;
+int check_unaligned_access(void *unused);
 
 #endif
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 1cfbba65d11a..8eb36e1dfb95 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -42,6 +42,9 @@ struct riscv_isainfo hart_isa[NR_CPUS];
 /* Performance information */
 DEFINE_PER_CPU(long, misaligned_access_speed);
 
+/* Boot-time in-parallel unaligned access measurement has occurred. */
+bool misaligned_speed_measured;
+
 /**
  * riscv_isa_extension_base() - Get base extension word
  *
@@ -556,8 +559,9 @@ unsigned long riscv_get_elf_hwcap(void)
 	return hwcap;
 }
 
-void check_unaligned_access(int cpu)
+int check_unaligned_access(void *unused)
 {
+	int cpu = smp_processor_id();
 	u64 start_cycles, end_cycles;
 	u64 word_cycles;
 	u64 byte_cycles;
@@ -571,7 +575,7 @@ void check_unaligned_access(int cpu)
 	page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
 	if (!page) {
 		pr_warn("Can't alloc pages to measure memcpy performance");
-		return;
+		return 0;
 	}
 
 	/* Make an unaligned destination buffer. */
@@ -643,15 +647,29 @@ void check_unaligned_access(int cpu)
 
 out:
 	__free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
+	return 0;
+}
+
+static void check_unaligned_access_nonboot_cpu(void *param)
+{
+	if (smp_processor_id() != 0)
+		check_unaligned_access(param);
 }
 
-static int check_unaligned_access_boot_cpu(void)
+static int check_unaligned_access_all_cpus(void)
 {
-	check_unaligned_access(0);
+	/* Check everybody except 0, who stays behind to tend jiffies. */
+	on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1);
+
+	/* Check core 0. */
+	smp_call_on_cpu(0, check_unaligned_access, NULL, true);
+
+	/* Boot-time measurements are complete. */
+	misaligned_speed_measured = true;
 	return 0;
 }
 
-arch_initcall(check_unaligned_access_boot_cpu);
+arch_initcall(check_unaligned_access_all_cpus);
 
 #ifdef CONFIG_RISCV_ALTERNATIVE
 /*
diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
index 1b8da4e40a4d..39322ae20a75 100644
--- a/arch/riscv/kernel/smpboot.c
+++ b/arch/riscv/kernel/smpboot.c
@@ -27,6 +27,7 @@
 #include <linux/sched/mm.h>
 #include <asm/cpu_ops.h>
 #include <asm/cpufeature.h>
+#include <asm/hwprobe.h>
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/numa.h>
@@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void)
 
 	numa_add_cpu(curr_cpuid);
 	set_cpu_online(curr_cpuid, 1);
-	check_unaligned_access(curr_cpuid);
+
+	/*
+	 * Boot-time misaligned access speed measurements are done in parallel
+	 * in an initcall. Only measure here for hotplug.
+	 */
+	if (misaligned_speed_measured &&
+	    (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) {
+		check_unaligned_access(NULL);
+	}
 
 	if (has_vector()) {
 		if (riscv_v_setup_vsize())