Message ID | 20231010115335.13862-1-yangyicong@huawei.com |
---|---|
State | New |
From | Yicong Yang <yangyicong@huawei.com> |
To | catalin.marinas@arm.com, will@kernel.org, sudeep.holla@arm.com, linux-arm-kernel@lists.infradead.org |
Cc | dietmar.eggemann@arm.com, gregkh@linuxfoundation.org, rafael@kernel.org, jonathan.cameron@huawei.com, prime.zeng@hisilicon.com, linuxarm@huawei.com, yangyicong@hisilicon.com, linux-kernel@vger.kernel.org |
Subject | [PATCH v2] arch_topology: Support SMT control on arm64 |
Date | Tue, 10 Oct 2023 19:53:35 +0800 |
Series | [v2] arch_topology: Support SMT control on arm64 |
Commit Message
Yicong Yang
Oct. 10, 2023, 11:53 a.m. UTC
From: Yicong Yang <yangyicong@hisilicon.com>

The core CPU control framework supports runtime SMT control which
is not yet supported on arm64. Besides the general vulnerabilities
concerns we want this runtime control on our arm64 server for:

- better single CPU performance in some cases
- saving overall power consumption

This patch implements it in the following aspects:

- implement the callbacks of the core
- update the SMT status after the topology enumerated on arm64
- select HOTPLUG_SMT for arm64

For disabling SMT we'll offline all the secondary threads and
only leave the primary thread. Since we don't have restriction
for primary thread selection, the first thread is chosen as the
primary thread in this implementation.

Tests has been done on our real ACPI based arm64 server and on
ACPI/OF based QEMU VMs.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
---
Change since v1:
- Avoid the complexity on SMT detecting by visiting each CPU once, concerned by Sudeep
Link: https://lore.kernel.org/all/20230919123319.23785-1-yangyicong@huawei.com/

 arch/arm64/Kconfig            |  1 +
 drivers/base/arch_topology.c  | 75 +++++++++++++++++++++++++++++++++++
 include/linux/arch_topology.h | 11 +++++
 3 files changed, 87 insertions(+)
Comments
On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
> From: Yicong Yang <yangyicong@hisilicon.com>
>
> The core CPU control framework supports runtime SMT control which
> is not yet supported on arm64. Besides the general vulnerabilities
> concerns we want this runtime control on our arm64 server for:

But shouldn't this be part of UEFI? Why manually try to determine this
at powerup in Linux?

> - better single CPU performance in some cases
> - saving overall power consumption
>
> This patch implements it in the following aspects:
>
> - implement the callbacks of the core
> - update the SMT status after the topology enumerated on arm64
> - select HOTPLUG_SMT for arm64
>
> For disabling SMT we'll offline all the secondary threads and
> only leave the primary thread. Since we don't have restriction
> for primary thread selection, the first thread is chosen as the
> primary thread in this implementation.
>
> Tests has been done on our real ACPI based arm64 server and on
> ACPI/OF based QEMU VMs.
>
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>

[quoted Kconfig hunk and diffstat snipped]

> +#ifdef CONFIG_HOTPLUG_SMT
> +static int topology_smt_num_threads = 1;
> +
> +void __init topology_smt_set_num_threads(void)
> +{
> +	int cpu, sibling, threads;
> +	cpumask_var_t to_visit;
> +
> +	if (!alloc_cpumask_var(&to_visit, GFP_KERNEL)) {
> +		pr_err("Failed to update the SMT info\n");
> +		return;
> +	}
> +
> +	cpumask_or(to_visit, to_visit, cpu_possible_mask);
> +
> +	/*
> +	 * Walk all the CPUs to find the largest thread number, in case we're
> +	 * on a heterogeneous platform with only part of the CPU cores support
> +	 * SMT.
> +	 *
> +	 * Get the thread number by checking the CPUs with same core id
> +	 * rather than checking the topology_sibling_cpumask(), since the
> +	 * sibling mask will not cover all the CPUs if there's CPU offline.
> +	 */
> +	for_each_cpu(cpu, to_visit) {
> +		threads = 1;
> +
> +		cpumask_clear_cpu(cpu, to_visit);
> +
> +		/* Invalid thread id, this CPU is not in a SMT core */
> +		if (cpu_topology[cpu].thread_id == -1)
> +			continue;
> +
> +		for_each_cpu(sibling, to_visit) {
> +			if (cpu_topology[sibling].thread_id != -1 &&
> +			    cpu_topology[cpu].core_id == cpu_topology[sibling].core_id)
> +				threads++;
> +
> +			cpumask_clear_cpu(sibling, to_visit);
> +		}
> +
> +		if (threads > topology_smt_num_threads)
> +			topology_smt_num_threads = threads;
> +	}
> +
> +	free_cpumask_var(to_visit);
> +
> +	/*
> +	 * We don't support CONFIG_SMT_NUM_THREADS_DYNAMIC so make the
> +	 * max_threads == num_threads.
> +	 */
> +	cpu_smt_set_num_threads(topology_smt_num_threads, topology_smt_num_threads);
> +}

How is this going to affect non-arm64 systems? Will we now be doing
this double loop for all cpus in the systems (i.e. for 10's of thousands
on x86)?

And again, why is this not an issue on the current platforms that
already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
requires this manual intervention?

thanks,

greg k-h
On 2023/10/10 20:33, Greg KH wrote:
> On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
>> From: Yicong Yang <yangyicong@hisilicon.com>
>>
>> The core CPU control framework supports runtime SMT control which
>> is not yet supported on arm64. Besides the general vulnerabilities
>> concerns we want this runtime control on our arm64 server for:
>
> But shouldn't this be part of UEFI? Why manually try to determine this
> at powerup in Linux?

We can disable/enable SMT by UEFI, but it lacks of flexibility. With runtime
support we can disable/enable on demand, rather than disable/enable all the
time. In our case, mainly for the below 2 reasons in the commit.

[quoted commit message and patch snipped]

> How is this going to affect non-arm64 systems? Will we now be doing
> this double loop for all cpus in the systems (i.e. for 10's of thousands
> on x86)?

x86 build the topology in their arch codes. Currently the users of this
driver are arm64, arm, riscv and parisc who select GENERIC_ARCH_TOPOLOGY.
An arch needs to select CONFIG_HOTPLUG_SMT for this feature and this patch
only adds arm64 support. So others won't be affected.

> And again, why is this not an issue on the current platforms that
> already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
> requires this manual intervention?

Currently I see x86 and powerpc supports CONFIG_HOTPLUG_SMT on the mainline.
For x86 they build the topology and detects the SMT suppport in the arch
code, seems they can directly get the SMT number by reading the system
register, refers to arch/x86/kernel/cpu/common.c:detect_ht_early(). For
powerpc I see they are reading the threads number from the firmware(OF)
and assuming all the SMT have the same thread number, refer to
arch/powerpc/kernel/setup-common.c:smp_setup_cpu_maps(). Please corrects
me if there's mistakes.

Back to arm64, there's no system registers to directly indicate the thread
numbers of each SMT in the system like x86, nor do we have the information
from the firmware (OF/ACPI). I cannot assume all the physical cores have
the same thread number, since we may have SMT cores and non-SMT cores on
heterogeneous platform. We can only know the SMT information after parsing
the firmware information. So in this implementation I scan the cpus to find
the biggest SMT number rather than use the SMT number of the first CPU in
case on a heterogeneous platform. And it can handle the case of both ACPI
and OF based platform.

Sudeep also suggests to detect SMT numbers during parsing the PPTT or
device tree, which means we need to handle it separately: do the detecting
in the OF scanning code in arch_topology and also the detecting in the PPTT
scanning code in arch codes. But if we agree it's better, I can switch to
that. Or any other suggestions are appreciated. Thanks.
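The single-pass scan described above can be followed more easily as a standalone sketch, with a plain struct array standing in for the kernel's cpu_topology[] and a bool array standing in for the to_visit cpumask (the topology values in the test are invented for illustration; this mirrors the patch's logic, including the inner loop clearing every remaining CPU from the visit set):

```c
#include <stdbool.h>

/* Simplified stand-in for struct cpu_topology. */
struct cpu_topo {
	int thread_id;	/* -1 means the CPU is not part of an SMT core */
	int core_id;
};

/*
 * Mirror of the patch's scan: walk a "to visit" set, count the threads
 * sharing the current CPU's core id, and remember the largest count.
 * As in the patch, the inner loop removes every remaining CPU from the
 * visit set, so each CPU is examined at most once.
 */
static int max_threads_per_core(const struct cpu_topo *topo, int nr_cpus)
{
	bool to_visit[64];	/* plays the role of the to_visit cpumask */
	int max_threads = 1;

	for (int i = 0; i < nr_cpus; i++)
		to_visit[i] = true;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (!to_visit[cpu])
			continue;
		to_visit[cpu] = false;

		/* Invalid thread id: this CPU is not in an SMT core */
		if (topo[cpu].thread_id == -1)
			continue;

		int threads = 1;
		for (int sibling = cpu + 1; sibling < nr_cpus; sibling++) {
			if (!to_visit[sibling])
				continue;
			if (topo[sibling].thread_id != -1 &&
			    topo[sibling].core_id == topo[cpu].core_id)
				threads++;
			to_visit[sibling] = false;	/* cleared unconditionally, as in the patch */
		}
		if (threads > max_threads)
			max_threads = threads;
	}
	return max_threads;
}
```

For example, two non-SMT CPUs followed by two 2-thread cores yields 2, and an all non-SMT table yields 1.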
On Tue, Oct 10, 2023 at 09:10:48PM +0800, Yicong Yang wrote:
> On 2023/10/10 20:33, Greg KH wrote:
> > But shouldn't this be part of UEFI? Why manually try to determine this
> > at powerup in Linux?
>
> We can disable/enable SMT by UEFI, but it lacks of flexibility. With
> runtime support we can disable/enable on demand, rather than
> disable/enable all the time. In our case, mainly for the below 2 reasons
> in the commit.

Runtime is fine, it's the initial detection that I have questions about,
see below.

> > And again, why is this not an issue on the current platforms that
> > already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
> > requires this manual intervention?
>
> Currently I see x86 and powerpc supports CONFIG_HOTPLUG_SMT on the
> mainline. For x86 they build the topology and detects the SMT suppport
> in the arch code, seems they can directly get the SMT number by reading
> the system register, refers to arch/x86/kernel/cpu/common.c:detect_ht_early().
> For powerpc I see they are reading the threads number from the
> firmware(OF) and assuming all the SMT have the same thread number, refer
> to arch/powerpc/kernel/setup-common.c:smp_setup_cpu_maps(). Please
> corrects me if there's mistakes.
>
> Back to arm64, there's no system registers to directly indicate the
> thread numbers of each SMT in the system like x86, nor do we have the
> information from the firmware (OF/ACPI).

UEFI/ACPI should be showing you this information, as that's how x86 gets
the information, right? If not, it needs to be added as part of the
specification.

> I cannot assume all the physical cores have the same thread number,
> since we may have SMT cores and non-SMT cores on heterogeneous platform.
> We can only know the SMT information after parsing the firmware
> information.

Ugh, heterogeneous platforms, why??? Anyway, I didn't think about that,
but even then, the firmware should be giving you this information in a
standard way, otherwise it's not following the UEFI/ACPI specification
from what I can tell.

thanks,

greg k-h
On Tue, 10 Oct 2023 16:08:25 +0200
Greg KH <gregkh@linuxfoundation.org> wrote:

[earlier quoting snipped]

> > I cannot assume all the physical cores have the same thread number,
> > since we may have SMT cores and non-SMT cores on heterogeneous
> > platform. We can only know the SMT information after parsing the
> > firmware information.
>
> Ugh, heterogeneous platforms, why??? Anyway, I didn't think about that,
> but even then, the firmware should be giving you this information in a
> standard way, otherwise it's not following the UEFI/ACPI specification
> from what I can tell.

It can be established from ACPI PPTT, but it's pretty indirect. You'd need
to search all leaf nodes of the tree, find the Processor Hierarchy Nodes
tagged as "Processor is a thread", then walk up from those to find common
ancestors. So ultimately I think you end up doing some form of multiple
pass. I'm not sure it ends up simpler than what yangyicong had here.
There may be better algorithms than the simple nested loop here though,
with better scalability.

Jonathan
Hi Yicong,
kernel test robot noticed the following build warnings:
[auto build test WARNING on arm64/for-next/core]
[also build test WARNING on driver-core/driver-core-testing driver-core/driver-core-next driver-core/driver-core-linus soc/for-next linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Yicong-Yang/arch_topology-Support-SMT-control-on-arm64/20231010-195926
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20231010115335.13862-1-yangyicong%40huawei.com
patch subject: [PATCH v2] arch_topology: Support SMT control on arm64
config: arm64-randconfig-004-20231012 (https://download.01.org/0day-ci/archive/20231012/202310122334.Q66kv9Wm-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231012/202310122334.Q66kv9Wm-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310122334.Q66kv9Wm-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from include/linux/cpumask.h:12,
from include/linux/smp.h:13,
from include/linux/lockdep.h:14,
from include/linux/spinlock.h:63,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:7,
from include/linux/slab.h:16,
from include/linux/resource_ext.h:11,
from include/linux/acpi.h:13,
from drivers/base/arch_topology.c:9:
In function 'bitmap_or',
inlined from 'cpumask_or' at include/linux/cpumask.h:582:2,
inlined from 'topology_smt_set_num_threads' at drivers/base/arch_topology.c:745:2:
>> include/linux/bitmap.h:330:24: warning: 'to_visit' is used uninitialized [-Wuninitialized]
330 | *dst = *src1 | *src2;
| ^~~~~
drivers/base/arch_topology.c: In function 'topology_smt_set_num_threads':
drivers/base/arch_topology.c:738:23: note: 'to_visit' declared here
738 | cpumask_var_t to_visit;
| ^~~~~~~~
vim +/to_visit +330 include/linux/bitmap.h
^1da177e4c3f41 Linus Torvalds 2005-04-16 325
^1da177e4c3f41 Linus Torvalds 2005-04-16 326 static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
2f9305eb31097f Rasmus Villemoes 2014-08-06 327 const unsigned long *src2, unsigned int nbits)
^1da177e4c3f41 Linus Torvalds 2005-04-16 328 {
4b0bc0bca83f3f Rusty Russell 2008-12-30 329 if (small_const_nbits(nbits))
^1da177e4c3f41 Linus Torvalds 2005-04-16 @330 *dst = *src1 | *src2;
^1da177e4c3f41 Linus Torvalds 2005-04-16 331 else
^1da177e4c3f41 Linus Torvalds 2005-04-16 332 __bitmap_or(dst, src1, src2, nbits);
^1da177e4c3f41 Linus Torvalds 2005-04-16 333 }
^1da177e4c3f41 Linus Torvalds 2005-04-16 334
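The robot's warning is real: alloc_cpumask_var() does not zero the allocated mask, and cpumask_or() reads to_visit as a source operand before anything has been stored in it. One straightforward repair — a sketch of what a follow-up could do, not necessarily what a later revision actually did — is to copy the possible mask instead of OR-ing into uninitialized memory:

```diff
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ topology_smt_set_num_threads()
-	cpumask_or(to_visit, to_visit, cpu_possible_mask);
+	cpumask_copy(to_visit, cpu_possible_mask);
```

Allocating with zalloc_cpumask_var() would also silence the warning, but the OR with a zeroed mask is then just a copy anyway, so cpumask_copy() states the intent directly.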
Hi Yicong,
kernel test robot noticed the following build errors:
[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on driver-core/driver-core-testing driver-core/driver-core-next driver-core/driver-core-linus soc/for-next linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Yicong-Yang/arch_topology-Support-SMT-control-on-arm64/20231010-195926
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20231010115335.13862-1-yangyicong%40huawei.com
patch subject: [PATCH v2] arch_topology: Support SMT control on arm64
config: arm64-allnoconfig (https://download.01.org/0day-ci/archive/20231012/202310122334.INC9SFga-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231012/202310122334.INC9SFga-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310122334.INC9SFga-lkp@intel.com/
All errors (new ones prefixed by >>):
kernel/cpu.c: In function 'cpuhp_smt_disable':
>> kernel/cpu.c:2687:23: error: implicit declaration of function 'cpu_down_maps_locked' [-Werror=implicit-function-declaration]
2687 | ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
| ^~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/cpu_down_maps_locked +2687 kernel/cpu.c
dc8d37ed304eee Arnd Bergmann 2019-12-10 2672
dc8d37ed304eee Arnd Bergmann 2019-12-10 2673 int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2674 {
dc8d37ed304eee Arnd Bergmann 2019-12-10 2675 int cpu, ret = 0;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2676
dc8d37ed304eee Arnd Bergmann 2019-12-10 2677 cpu_maps_update_begin();
dc8d37ed304eee Arnd Bergmann 2019-12-10 2678 for_each_online_cpu(cpu) {
dc8d37ed304eee Arnd Bergmann 2019-12-10 2679 if (topology_is_primary_thread(cpu))
dc8d37ed304eee Arnd Bergmann 2019-12-10 2680 continue;
38253464bc821d Michael Ellerman 2023-07-05 2681 /*
38253464bc821d Michael Ellerman 2023-07-05 2682 * Disable can be called with CPU_SMT_ENABLED when changing
38253464bc821d Michael Ellerman 2023-07-05 2683 * from a higher to lower number of SMT threads per core.
38253464bc821d Michael Ellerman 2023-07-05 2684 */
38253464bc821d Michael Ellerman 2023-07-05 2685 if (ctrlval == CPU_SMT_ENABLED && cpu_smt_thread_allowed(cpu))
38253464bc821d Michael Ellerman 2023-07-05 2686 continue;
dc8d37ed304eee Arnd Bergmann 2019-12-10 @2687 ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
dc8d37ed304eee Arnd Bergmann 2019-12-10 2688 if (ret)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2689 break;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2690 /*
dc8d37ed304eee Arnd Bergmann 2019-12-10 2691 * As this needs to hold the cpu maps lock it's impossible
dc8d37ed304eee Arnd Bergmann 2019-12-10 2692 * to call device_offline() because that ends up calling
dc8d37ed304eee Arnd Bergmann 2019-12-10 2693 * cpu_down() which takes cpu maps lock. cpu maps lock
dc8d37ed304eee Arnd Bergmann 2019-12-10 2694 * needs to be held as this might race against in kernel
dc8d37ed304eee Arnd Bergmann 2019-12-10 2695 * abusers of the hotplug machinery (thermal management).
dc8d37ed304eee Arnd Bergmann 2019-12-10 2696 *
dc8d37ed304eee Arnd Bergmann 2019-12-10 2697 * So nothing would update device:offline state. That would
dc8d37ed304eee Arnd Bergmann 2019-12-10 2698 * leave the sysfs entry stale and prevent onlining after
dc8d37ed304eee Arnd Bergmann 2019-12-10 2699 * smt control has been changed to 'off' again. This is
dc8d37ed304eee Arnd Bergmann 2019-12-10 2700 * called under the sysfs hotplug lock, so it is properly
dc8d37ed304eee Arnd Bergmann 2019-12-10 2701 * serialized against the regular offline usage.
dc8d37ed304eee Arnd Bergmann 2019-12-10 2702 */
dc8d37ed304eee Arnd Bergmann 2019-12-10 2703 cpuhp_offline_cpu_device(cpu);
dc8d37ed304eee Arnd Bergmann 2019-12-10 2704 }
dc8d37ed304eee Arnd Bergmann 2019-12-10 2705 if (!ret)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2706 cpu_smt_control = ctrlval;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2707 cpu_maps_update_done();
dc8d37ed304eee Arnd Bergmann 2019-12-10 2708 return ret;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2709 }
dc8d37ed304eee Arnd Bergmann 2019-12-10 2710
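This build error points at the Kconfig side of the patch: cpuhp_smt_disable() in kernel/cpu.c calls cpu_down_maps_locked(), which is only provided when CONFIG_HOTPLUG_CPU is enabled, yet the patch selects HOTPLUG_SMT merely `if SMP` — and an arm64 allnoconfig has SMP on but CPU hotplug off. One plausible repair (a sketch; the thread does not settle on a fix here) is to tie the select to HOTPLUG_CPU:

```diff
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ config ARM64
-	select HOTPLUG_SMT if SMP
+	select HOTPLUG_SMT if HOTPLUG_CPU
```

SMT control is a CPU-hotplug operation (it offlines secondary threads), so gating it on HOTPLUG_CPU matches what the code actually requires.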
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 78f20e632712..339661ceabc8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -233,6 +233,7 @@ config ARM64
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
 	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+	select HOTPLUG_SMT if SMP
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index b741b5ba82bd..c5b453c2cd61 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -729,6 +729,75 @@ const struct cpumask *cpu_clustergroup_mask(int cpu)
 	return &cpu_topology[cpu].cluster_sibling;
 }
 
+#ifdef CONFIG_HOTPLUG_SMT
+static int topology_smt_num_threads = 1;
+
+void __init topology_smt_set_num_threads(void)
+{
+	int cpu, sibling, threads;
+	cpumask_var_t to_visit;
+
+	if (!alloc_cpumask_var(&to_visit, GFP_KERNEL)) {
+		pr_err("Failed to update the SMT info\n");
+		return;
+	}
+
+	cpumask_or(to_visit, to_visit, cpu_possible_mask);
+
+	/*
+	 * Walk all the CPUs to find the largest thread number, in case we're
+	 * on a heterogeneous platform with only part of the CPU cores support
+	 * SMT.
+	 *
+	 * Get the thread number by checking the CPUs with same core id
+	 * rather than checking the topology_sibling_cpumask(), since the
+	 * sibling mask will not cover all the CPUs if there's CPU offline.
+	 */
+	for_each_cpu(cpu, to_visit) {
+		threads = 1;
+
+		cpumask_clear_cpu(cpu, to_visit);
+
+		/* Invalid thread id, this CPU is not in a SMT core */
+		if (cpu_topology[cpu].thread_id == -1)
+			continue;
+
+		for_each_cpu(sibling, to_visit) {
+			if (cpu_topology[sibling].thread_id != -1 &&
+			    cpu_topology[cpu].core_id == cpu_topology[sibling].core_id)
+				threads++;
+
+			cpumask_clear_cpu(sibling, to_visit);
+		}
+
+		if (threads > topology_smt_num_threads)
+			topology_smt_num_threads = threads;
+	}
+
+	free_cpumask_var(to_visit);
+
+	/*
+	 * We don't support CONFIG_SMT_NUM_THREADS_DYNAMIC so make the
+	 * max_threads == num_threads.
+	 */
+	cpu_smt_set_num_threads(topology_smt_num_threads, topology_smt_num_threads);
+}
+
+/*
+ * On SMT Hotplug the primary thread of the SMT won't be disabled. For x86 they
+ * seem to have a primary thread for special purpose. For other arthitectures
+ * like arm64 there's no such restriction for a primary thread, so make the
+ * first thread in the SMT as the primary thread.
+ */
+bool topology_is_primary_thread(unsigned int cpu)
+{
+	if (cpu == cpumask_first(topology_sibling_cpumask(cpu)))
+		return true;
+
+	return false;
+}
+#endif
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -841,6 +910,12 @@ void __init init_cpu_topology(void)
 		reset_cpu_topology();
 	}
 
+	/*
+	 * By this stage we get to know whether we support SMT or not, update
+	 * the information for the core.
+	 */
+	topology_smt_set_num_threads();
+
 	for_each_possible_cpu(cpu) {
 		ret = fetch_cache_info(cpu);
 		if (!ret)
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index a07b510e7dc5..cf605a576e7b 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -92,6 +92,17 @@ void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
 int parse_acpi_topology(void);
+
+#ifdef CONFIG_HOTPLUG_SMT
+bool topology_smt_supported(void);
+bool topology_is_primary_thread(unsigned int cpu);
+void topology_smt_set_num_threads(void);
+#else
+static inline bool topology_smt_supported(void) { return false; }
+static inline bool topology_is_primary_thread(unsigned int cpu) { return false; }
+static inline void topology_smt_set_num_threads(void) { }
+#endif
+
 #endif
 #endif /* _LINUX_ARCH_TOPOLOGY_H_ */
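The patch's primary-thread rule — with no architectural constraint on arm64, the lowest-numbered CPU in a core's sibling mask is the primary thread — can be modelled outside the kernel with a plain bitmask standing in for topology_sibling_cpumask() (the masks below are invented for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of topology_is_primary_thread(): a CPU is the primary
 * thread of its core iff it is the first (lowest-numbered) CPU in the
 * core's sibling mask. A 64-bit word stands in for the cpumask.
 */
static bool is_primary_thread(unsigned int cpu, uint64_t sibling_mask)
{
	unsigned int first = 0;

	/* cpumask_first(): index of the lowest set bit */
	while (first < 64 && !(sibling_mask & (1ULL << first)))
		first++;

	return cpu == first;
}
```

So for a core whose threads are CPUs 4 and 5 (mask 0x30), CPU 4 is primary and CPU 5 is a secondary thread that SMT-off would offline; a non-SMT CPU is trivially the primary of its single-member mask.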