From patchwork Wed Aug 2 10:21:02 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 129811
Message-ID: <20230802101932.876156493@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
    Huang Rui, Juergen Gross, Dimitri Sivanich, Michael Kelley, Wei Liu
Subject: [patch V3 03/40] x86/cpu: Encapsulate topology information in cpuinfo_x86
References: <20230802101635.459108805@linutronix.de>
Date: Wed, 2 Aug 2023 12:21:02 +0200 (CEST)

The topology-related information is randomly scattered across cpuinfo_x86.

Create a new structure, cpuinfo_topology, and as a first step move
initial_apicid and apicid into it. Aside from being more readable, this is
in preparation for replacing the horribly fragile CPU topology evaluation
code further down the road.

Consolidate APIC ID fields to u32 as that represents the hardware type.

No functional change.

Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/processor.h          | 14 +++++++++-----
 arch/x86/kernel/cpu/amd.c                 | 10 +++++-----
 arch/x86/kernel/cpu/cacheinfo.c           | 20 ++++++++++----------
 arch/x86/kernel/cpu/common.c              | 18 +++++++++---------
 arch/x86/kernel/cpu/hygon.c               | 12 ++++++------
 arch/x86/kernel/cpu/mce/apei.c            |  2 +-
 arch/x86/kernel/cpu/mce/core.c            |  2 +-
 arch/x86/kernel/cpu/proc.c                |  4 ++--
 arch/x86/kernel/cpu/topology.c            | 12 ++++++------
 arch/x86/xen/apic.c                       |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c |  2 +-
 drivers/virt/acrn/hsm.c                   |  2 +-
 12 files changed, 52 insertions(+), 48 deletions(-)

--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -74,11 +74,16 @@ extern u16 __read_mostly tlb_lld_4m[NR_I
 extern u16 __read_mostly tlb_lld_1g[NR_INFO];
 
 /*
- * CPU type and hardware bug flags. Kept separately for each CPU.
- * Members of this structure are referenced in head_32.S, so think twice
- * before touching them. [mj]
+ * CPU type and hardware bug flags. Kept separately for each CPU.
  */
 
+struct cpuinfo_topology {
+	// Real APIC ID read from the local APIC
+	u32			apicid;
+	// The initial APIC ID provided by CPUID
+	u32			initial_apicid;
+};
+
 struct cpuinfo_x86 {
 	__u8			x86;		/* CPU family */
 	__u8			x86_vendor;	/* CPU vendor */
@@ -111,6 +116,7 @@ struct cpuinfo_x86 {
 	};
 	char			x86_vendor_id[16];
 	char			x86_model_id[64];
+	struct cpuinfo_topology	topo;
 	/* in KB - valid for CPUS which support this call: */
 	unsigned int		x86_cache_size;
 	int			x86_cache_alignment;	/* In bytes */
@@ -124,8 +130,6 @@ struct cpuinfo_x86 {
 	u64			ppin;
 	/* cpuid returned max cores value: */
 	u16			x86_max_cores;
-	u16			apicid;
-	u16			initial_apicid;
 	u16			x86_clflush_size;
 	/* number of cores as seen by the OS: */
 	u16			booted_cores;
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -387,9 +387,9 @@ static void amd_detect_cmp(struct cpuinf
 	bits = c->x86_coreid_bits;
 
 	/* Low order bits define the core id (index of core in socket) */
-	c->cpu_core_id = c->initial_apicid & ((1 << bits)-1);
+	c->cpu_core_id = c->topo.initial_apicid & ((1 << bits)-1);
 	/* Convert the initial APIC ID into the socket ID */
-	c->phys_proc_id = c->initial_apicid >> bits;
+	c->phys_proc_id = c->topo.initial_apicid >> bits;
 	/* use socket ID also for last level cache */
 	per_cpu(cpu_llc_id, cpu) = c->cpu_die_id = c->phys_proc_id;
 }
@@ -405,7 +405,7 @@ static void srat_detect_node(struct cpui
 #ifdef CONFIG_NUMA
 	int cpu = smp_processor_id();
 	int node;
-	unsigned apicid = c->apicid;
+	unsigned apicid = c->topo.apicid;
 
 	node = numa_cpu_node(cpu);
 	if (node == NUMA_NO_NODE)
@@ -439,7 +439,7 @@ static void srat_detect_node(struct cpui
 		 * through CPU mapping may alter the outcome, directly
 		 * access __apicid_to_node[].
 		 */
-		int ht_nodeid = c->initial_apicid;
+		int ht_nodeid = c->topo.initial_apicid;
 
 		if (__apicid_to_node[ht_nodeid] != NUMA_NO_NODE)
 			node = __apicid_to_node[ht_nodeid];
@@ -934,7 +934,7 @@ static void init_amd(struct cpuinfo_x86
 		set_cpu_cap(c, X86_FEATURE_FSRS);
 
 	/* get apicid instead of initial apic id from cpuid */
-	c->apicid = read_apic_id();
+	c->topo.apicid = read_apic_id();
 
 	/* K6s reports MCEs but don't actually have all the MSRs */
 	if (c->x86 < 6)
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -678,7 +678,7 @@ void cacheinfo_amd_init_llc_id(struct cp
 		 * LLC is at the core complex level.
 		 * Core complex ID is ApicId[3] for these processors.
 		 */
-		per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
+		per_cpu(cpu_llc_id, cpu) = c->topo.apicid >> 3;
 	} else {
 		/*
 		 * LLC ID is calculated from the number of threads sharing the
@@ -694,7 +694,7 @@ void cacheinfo_amd_init_llc_id(struct cp
 		if (num_sharing_cache) {
 			int bits = get_count_order(num_sharing_cache);
 
-			per_cpu(cpu_llc_id, cpu) = c->apicid >> bits;
+			per_cpu(cpu_llc_id, cpu) = c->topo.apicid >> bits;
 		}
 	}
 }
@@ -712,7 +712,7 @@ void cacheinfo_hygon_init_llc_id(struct
 	 * LLC is at the core complex level.
 	 * Core complex ID is ApicId[3] for these processors.
 	 */
-	per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
+	per_cpu(cpu_llc_id, cpu) = c->topo.apicid >> 3;
 }
 
 void init_amd_cacheinfo(struct cpuinfo_x86 *c)
@@ -776,13 +776,13 @@ void init_intel_cacheinfo(struct cpuinfo
 				new_l2 = this_leaf.size/1024;
 				num_threads_sharing = 1 + this_leaf.eax.split.num_threads_sharing;
 				index_msb = get_count_order(num_threads_sharing);
-				l2_id = c->apicid & ~((1 << index_msb) - 1);
+				l2_id = c->topo.apicid & ~((1 << index_msb) - 1);
 				break;
 			case 3:
 				new_l3 = this_leaf.size/1024;
 				num_threads_sharing = 1 + this_leaf.eax.split.num_threads_sharing;
 				index_msb = get_count_order(num_threads_sharing);
-				l3_id = c->apicid & ~((1 << index_msb) - 1);
+				l3_id = c->topo.apicid & ~((1 << index_msb) - 1);
 				break;
 			default:
 				break;
@@ -915,7 +915,7 @@ static int __cache_amd_cpumap_setup(unsi
 		unsigned int apicid, nshared, first, last;
 
 		nshared = base->eax.split.num_threads_sharing + 1;
-		apicid = cpu_data(cpu).apicid;
+		apicid = cpu_data(cpu).topo.apicid;
 		first = apicid - (apicid % nshared);
 		last = first + nshared - 1;
 
@@ -924,14 +924,14 @@ static int __cache_amd_cpumap_setup(unsi
 			if (!this_cpu_ci->info_list)
 				continue;
 
-			apicid = cpu_data(i).apicid;
+			apicid = cpu_data(i).topo.apicid;
 			if ((apicid < first) || (apicid > last))
 				continue;
 
 			this_leaf = this_cpu_ci->info_list + index;
 			for_each_online_cpu(sibling) {
-				apicid = cpu_data(sibling).apicid;
+				apicid = cpu_data(sibling).topo.apicid;
 				if ((apicid < first) || (apicid > last))
 					continue;
 
 				cpumask_set_cpu(sibling,
@@ -969,7 +969,7 @@ static void __cache_cpumap_setup(unsigne
 	index_msb = get_count_order(num_threads_sharing);
 
 	for_each_online_cpu(i)
-		if (cpu_data(i).apicid >> index_msb == c->apicid >> index_msb) {
+		if (cpu_data(i).topo.apicid >> index_msb == c->topo.apicid >> index_msb) {
 			struct cpu_cacheinfo *sib_cpu_ci = get_cpu_cacheinfo(i);
 
 			if (i == cpu || !sib_cpu_ci->info_list)
@@ -1024,7 +1024,7 @@ static void get_cache_id(int cpu, struct
 
 	num_threads_sharing = 1 + id4_regs->eax.split.num_threads_sharing;
 	index_msb = get_count_order(num_threads_sharing);
-	id4_regs->id = c->apicid >> index_msb;
+	id4_regs->id = c->topo.apicid >> index_msb;
 }
 
 int populate_cache_leaves(unsigned int cpu)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -899,7 +899,7 @@ void detect_ht(struct cpuinfo_x86 *c)
 		return;
 
 	index_msb = get_count_order(smp_num_siblings);
-	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+	c->phys_proc_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb);
 
 	smp_num_siblings = smp_num_siblings / c->x86_max_cores;
 
@@ -907,7 +907,7 @@ void detect_ht(struct cpuinfo_x86 *c)
 
 	core_bits = get_count_order(c->x86_max_cores);
 
-	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
+	c->cpu_core_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb) &
 		((1 << core_bits) - 1);
 #endif
 }
@@ -1721,15 +1721,15 @@ static void generic_identify(struct cpui
 	get_cpu_address_sizes(c);
 
 	if (c->cpuid_level >= 0x00000001) {
-		c->initial_apicid = (cpuid_ebx(1) >> 24) & 0xFF;
+		c->topo.initial_apicid = (cpuid_ebx(1) >> 24) & 0xFF;
 #ifdef CONFIG_X86_32
 # ifdef CONFIG_SMP
-		c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+		c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
 # else
-		c->apicid = c->initial_apicid;
+		c->topo.apicid = c->topo.initial_apicid;
 # endif
 #endif
-		c->phys_proc_id = c->initial_apicid;
+		c->phys_proc_id = c->topo.initial_apicid;
 	}
 
 	get_model_name(c); /* Default name */
@@ -1763,9 +1763,9 @@ static void validate_apic_and_package_id
 
 	apicid = apic->cpu_present_to_apicid(cpu);
 
-	if (apicid != c->apicid) {
+	if (apicid != c->topo.apicid) {
 		pr_err(FW_BUG "CPU%u: APIC id mismatch. Firmware: %x APIC: %x\n",
-		       cpu, apicid, c->initial_apicid);
+		       cpu, apicid, c->topo.initial_apicid);
 	}
 	BUG_ON(topology_update_package_map(c->phys_proc_id, cpu));
 	BUG_ON(topology_update_die_map(c->cpu_die_id, cpu));
@@ -1815,7 +1815,7 @@ static void identify_cpu(struct cpuinfo_
 	apply_forced_caps(c);
 
 #ifdef CONFIG_X86_64
-	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+	c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
 #endif
 
 	/*
--- a/arch/x86/kernel/cpu/hygon.c
+++ b/arch/x86/kernel/cpu/hygon.c
@@ -88,7 +88,7 @@ static void hygon_get_topology(struct cp
 		c->x86_coreid_bits = get_count_order(c->x86_max_cores);
 
 		/* Socket ID is ApicId[6] for these processors. */
-		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
+		c->phys_proc_id = c->topo.apicid >> APICID_SOCKET_ID_BIT;
 
 		cacheinfo_hygon_init_llc_id(c, cpu);
 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
@@ -116,9 +116,9 @@ static void hygon_detect_cmp(struct cpui
 	bits = c->x86_coreid_bits;
 
 	/* Low order bits define the core id (index of core in socket) */
-	c->cpu_core_id = c->initial_apicid & ((1 << bits)-1);
+	c->cpu_core_id = c->topo.initial_apicid & ((1 << bits)-1);
 	/* Convert the initial APIC ID into the socket ID */
-	c->phys_proc_id = c->initial_apicid >> bits;
+	c->phys_proc_id = c->topo.initial_apicid >> bits;
 	/* use socket ID also for last level cache */
 	per_cpu(cpu_llc_id, cpu) = c->cpu_die_id = c->phys_proc_id;
 }
@@ -128,7 +128,7 @@ static void srat_detect_node(struct cpui
 #ifdef CONFIG_NUMA
 	int cpu = smp_processor_id();
 	int node;
-	unsigned int apicid = c->apicid;
+	unsigned int apicid = c->topo.apicid;
 
 	node = numa_cpu_node(cpu);
 	if (node == NUMA_NO_NODE)
@@ -161,7 +161,7 @@ static void srat_detect_node(struct cpui
 		 * through CPU mapping may alter the outcome, directly
 		 * access __apicid_to_node[].
 		 */
-		int ht_nodeid = c->initial_apicid;
+		int ht_nodeid = c->topo.initial_apicid;
 
 		if (__apicid_to_node[ht_nodeid] != NUMA_NO_NODE)
 			node = __apicid_to_node[ht_nodeid];
@@ -301,7 +301,7 @@ static void init_hygon(struct cpuinfo_x8
 		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
 
 	/* get apicid instead of initial apic id from cpuid */
-	c->apicid = read_apic_id();
+	c->topo.apicid = read_apic_id();
 
 	/*
 	 * XXX someone from Hygon needs to confirm this DTRT
--- a/arch/x86/kernel/cpu/mce/apei.c
+++ b/arch/x86/kernel/cpu/mce/apei.c
@@ -103,7 +103,7 @@ int apei_smca_report_x86_error(struct cp
 	m.socketid = -1;
 
 	for_each_possible_cpu(cpu) {
-		if (cpu_data(cpu).initial_apicid == lapic_id) {
+		if (cpu_data(cpu).topo.initial_apicid == lapic_id) {
 			m.extcpu = cpu;
 			m.socketid = cpu_data(m.extcpu).phys_proc_id;
 			break;
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -124,7 +124,7 @@ void mce_setup(struct mce *m)
 	m->cpuvendor = boot_cpu_data.x86_vendor;
 	m->cpuid = cpuid_eax(1);
 	m->socketid = cpu_data(m->extcpu).phys_proc_id;
-	m->apicid = cpu_data(m->extcpu).initial_apicid;
+	m->apicid = cpu_data(m->extcpu).topo.initial_apicid;
 	m->mcgcap = __rdmsr(MSR_IA32_MCG_CAP);
 	m->ppin = cpu_data(m->extcpu).ppin;
 	m->microcode = boot_cpu_data.microcode;
--- a/arch/x86/kernel/cpu/proc.c
+++ b/arch/x86/kernel/cpu/proc.c
@@ -23,8 +23,8 @@ static void show_cpuinfo_core(struct seq
 		   cpumask_weight(topology_core_cpumask(cpu)));
 	seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
 	seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
-	seq_printf(m, "apicid\t\t: %d\n", c->apicid);
-	seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid);
+	seq_printf(m, "apicid\t\t: %d\n", c->topo.apicid);
+	seq_printf(m, "initial apicid\t: %d\n", c->topo.initial_apicid);
 #endif
 }
 
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -78,7 +78,7 @@ int detect_extended_topology_early(struc
 	/*
 	 * initial apic id, which also represents 32-bit extended x2apic id.
 	 */
-	c->initial_apicid = edx;
+	c->topo.initial_apicid = edx;
 	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
 #endif
 	return 0;
@@ -108,7 +108,7 @@ int detect_extended_topology(struct cpui
 	 * Populate HT related information from sub-leaf level 0.
 	 */
 	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
-	c->initial_apicid = edx;
+	c->topo.initial_apicid = edx;
 	core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
 	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
@@ -146,20 +146,20 @@ int detect_extended_topology(struct cpui
 		die_select_mask = (~(-1 << die_plus_mask_width)) >>
 				core_plus_mask_width;
 
-		c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid,
+		c->cpu_core_id = apic->phys_pkg_id(c->topo.initial_apicid,
 				ht_mask_width) & core_select_mask;
 
 		if (die_level_present) {
-			c->cpu_die_id = apic->phys_pkg_id(c->initial_apicid,
+			c->cpu_die_id = apic->phys_pkg_id(c->topo.initial_apicid,
 					core_plus_mask_width) & die_select_mask;
 		}
 
-		c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
+		c->phys_proc_id = apic->phys_pkg_id(c->topo.initial_apicid,
 				pkg_mask_width);
 		/*
 		 * Reinit the apicid, now that we have extended initial_apicid.
 		 */
-		c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+		c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
 
 		c->x86_max_cores = (core_level_siblings / smp_num_siblings);
 		__max_die_per_package = (die_level_siblings / core_level_siblings);
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -118,7 +118,7 @@ static int xen_phys_pkg_id(int initial_a
 static int xen_cpu_present_to_apicid(int cpu)
 {
 	if (cpu_present(cpu))
-		return cpu_data(cpu).apicid;
+		return cpu_data(cpu).topo.apicid;
 	else
 		return BAD_APICID;
 }
--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
@@ -2255,7 +2255,7 @@ static int kfd_cpumask_to_apic_id(const
 	if (first_cpu_of_numa_node >= nr_cpu_ids)
 		return -1;
 #ifdef CONFIG_X86_64
-	return cpu_data(first_cpu_of_numa_node).apicid;
+	return cpu_data(first_cpu_of_numa_node).topo.apicid;
 #else
 	return first_cpu_of_numa_node;
 #endif
--- a/drivers/virt/acrn/hsm.c
+++ b/drivers/virt/acrn/hsm.c
@@ -447,7 +447,7 @@ static ssize_t remove_cpu_store(struct d
 	if (cpu_online(cpu))
 		remove_cpu(cpu);
 
-	lapicid = cpu_data(cpu).apicid;
+	lapicid = cpu_data(cpu).topo.apicid;
 	dev_dbg(dev, "Try to remove cpu %lld with lapicid %lld\n", cpu, lapicid);
 	ret = hcall_sos_remove_cpu(lapicid);
 	if (ret < 0) {
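
Not part of the patch: a minimal standalone sketch of the access-pattern change
the hunks above introduce, where the APIC ID fields move from loose cpuinfo_x86
members into an embedded struct cpuinfo_topology reached via c->topo.*. The
struct and field names mirror the processor.h hunk; the userspace scaffolding
(stdint/stdio includes, stub values, main()) is purely illustrative and does
not exist in the kernel tree in this form.

/*
 * Illustration only -- a stripped-down model of the cpuinfo_topology
 * encapsulation shown above, compiled as a plain userspace program.
 */
#include <stdint.h>
#include <stdio.h>

struct cpuinfo_topology {
	/* Real APIC ID read from the local APIC */
	uint32_t apicid;
	/* The initial APIC ID provided by CPUID */
	uint32_t initial_apicid;
};

struct cpuinfo_x86 {
	uint8_t x86;			/* CPU family */
	struct cpuinfo_topology topo;	/* was: u16 apicid, u16 initial_apicid */
	/* ... remaining members elided ... */
};

int main(void)
{
	struct cpuinfo_x86 c = { .x86 = 6 };

	/*
	 * Callers that previously read c.apicid / c.initial_apicid now go
	 * through the embedded topo struct, e.g. as generic_identify() does
	 * in the UP case where the APIC ID equals the initial APIC ID.
	 */
	c.topo.initial_apicid = 0x10;
	c.topo.apicid = c.topo.initial_apicid;

	printf("apicid: %u, initial apicid: %u\n",
	       c.topo.apicid, c.topo.initial_apicid);
	return 0;
}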