From patchwork Tue Jan 30 22:20:33 2024
X-Patchwork-Submitter: "Luck, Tony"
X-Patchwork-Id: 194350
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet, Shuah Khan,
 x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap,
 Drew Fustini, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15-RFC 7/8] x86/resctrl: Sub NUMA Cluster detection and enable
Date: Tue, 30 Jan 2024 14:20:33 -0800
Message-ID: <20240130222034.37181-8-tony.luck@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240130222034.37181-1-tony.luck@intel.com>
References: <20240126223837.21835-1-tony.luck@intel.com>
 <20240130222034.37181-1-tony.luck@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

There isn't a simple hardware bit that indicates whether a CPU is running
in Sub NUMA Cluster (SNC) mode. Infer the state by comparing the ratio of
NUMA nodes to L3 cache instances.

When SNC mode is detected, reconfigure the RMID counters by updating the
MSR_RMID_SNC_CONFIG MSR on each socket as CPUs are seen.

Update the scope of the RDT_RESOURCE_L3_MON resource to NODE.

Clearing bit zero of the MSR divides the RMIDs and renumbers the ones on
the second SNC node to start from zero.

Signed-off-by: Tony Luck
---
 arch/x86/include/asm/msr-index.h   |   1 +
 arch/x86/kernel/cpu/resctrl/core.c | 119 +++++++++++++++++++++++++++++
 2 files changed, 120 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index f1bd7b91b3c6..f6ba7d0397b8 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1119,6 +1119,7 @@
 #define MSR_IA32_QM_CTR			0xc8e
 #define MSR_IA32_PQR_ASSOC		0xc8f
 #define MSR_IA32_L3_CBM_BASE		0xc90
+#define MSR_RMID_SNC_CONFIG		0xca0
 #define MSR_IA32_L2_CBM_BASE		0xd10
 #define MSR_IA32_MBA_THRTL_BASE		0xd50

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index dc886d2c9a33..84c36e10241f 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -16,11 +16,14 @@

 #define pr_fmt(fmt)	"resctrl: " fmt

+#include
 #include
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include "internal.h"
@@ -651,11 +654,42 @@ static void clear_closid_rmid(int cpu)
 	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
 }

+/*
+ * The power-on reset value of MSR_RMID_SNC_CONFIG is 0x1
+ * which indicates that RMIDs are configured in legacy mode.
+ * This mode is incompatible with Linux resctrl semantics
+ * as RMIDs are partitioned between SNC nodes, which requires
+ * a user to know which RMID is allocated to a task.
+ * Clearing bit 0 reconfigures the RMID counters for use
+ * in Sub NUMA Cluster mode. This mode is better for Linux.
+ * The RMID space is divided between all SNC nodes with the
+ * RMIDs renumbered to start from zero in each node when
+ * counting operations from tasks. Code to read the counters
+ * must adjust RMID counter numbers based on SNC node. See
+ * __rmid_read() for code that does this.
+ */
+static void snc_remap_rmids(int cpu)
+{
+	u64 val;
+
+	/* Only need to enable once per package. */
+	if (cpumask_first(topology_core_cpumask(cpu)) != cpu)
+		return;
+
+	rdmsrl(MSR_RMID_SNC_CONFIG, val);
+	val &= ~BIT_ULL(0);
+	wrmsrl(MSR_RMID_SNC_CONFIG, val);
+}
+
 static int resctrl_online_cpu(unsigned int cpu)
 {
 	struct rdt_resource *r;

 	mutex_lock(&rdtgroup_mutex);
+
+	if (snc_nodes_per_l3_cache > 1)
+		snc_remap_rmids(cpu);
+
 	for_each_capable_rdt_resource(r)
 		domain_add_cpu(cpu, r);
 	/* The cpu is set in default rdtgroup after online. */
@@ -910,11 +944,96 @@ static __init bool get_rdt_resources(void)
 	return (rdt_mon_capable || rdt_alloc_capable);
 }

+/* CPU models that support MSR_RMID_SNC_CONFIG */
+static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
+	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, 0),
+	{}
+};
+
+/*
+ * There isn't a simple hardware bit that indicates whether a CPU is running
+ * in Sub NUMA Cluster (SNC) mode. Infer the state by comparing the
+ * ratio of NUMA nodes to L3 cache instances.
+ * It is not possible to accurately determine SNC state if the system is
+ * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
+ * to L3 caches. It will be OK if the system is booted with hyperthreading
+ * disabled (since this doesn't affect the ratio).
+ */
+static __init int snc_get_config(void)
+{
+	unsigned long *node_caches;
+	int mem_only_nodes = 0;
+	int cpu, node, ret;
+	int num_l3_caches;
+	int cache_id;
+
+	if (!x86_match_cpu(snc_cpu_ids))
+		return 1;
+
+	node_caches = bitmap_zalloc(num_possible_cpus(), GFP_KERNEL);
+	if (!node_caches)
+		return 1;
+
+	cpus_read_lock();
+
+	if (num_online_cpus() != num_present_cpus())
+		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
+
+	for_each_node(node) {
+		cpu = cpumask_first(cpumask_of_node(node));
+		if (cpu < nr_cpu_ids) {
+			cache_id = get_cpu_cacheinfo_id(cpu, 3);
+			if (cache_id != -1)
+				set_bit(cache_id, node_caches);
+		} else {
+			mem_only_nodes++;
+		}
+	}
+	cpus_read_unlock();
+
+	num_l3_caches = bitmap_weight(node_caches, num_possible_cpus());
+	kfree(node_caches);
+
+	if (!num_l3_caches)
+		goto insane;
+
+	/* sanity check #1: Number of CPU nodes must be multiple of num_l3_caches */
+	if ((nr_node_ids - mem_only_nodes) % num_l3_caches)
+		goto insane;
+
+	ret = (nr_node_ids - mem_only_nodes) / num_l3_caches;
+
+	/* sanity check #2: Only valid results are 1, 2, 3, 4 */
+	switch (ret) {
+	case 1:
+		break;
+	case 2:
+	case 3:
+	case 4:
+		rdt_resources_all[RDT_RESOURCE_L3_MON].r_resctrl.scope = RESCTRL_NODE;
+		pr_info("Sub-NUMA Cluster: %d nodes per L3 cache\n", ret);
+		break;
+	default:
+		goto insane;
+	}
+
+	return ret;
+insane:
+	pr_warn("SNC insanity: CPU nodes = %d num_l3_caches = %d\n",
+		(nr_node_ids - mem_only_nodes), num_l3_caches);
+	return 1;
+}
+
 static __init void rdt_init_res_defs_intel(void)
 {
 	struct rdt_hw_resource *hw_res;
 	struct rdt_resource *r;

+	snc_nodes_per_l3_cache = snc_get_config();
+
 	for_each_rdt_resource(r) {
 		hw_res = resctrl_to_arch_res(r);
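
The comment in snc_remap_rmids() points readers at __rmid_read() for the
counter-read-side adjustment, which is not part of this patch. Purely as an
illustrative sketch (the helper name, and the use of r->num_rmid as the
per-SNC-node RMID count, are assumptions for illustration, not code from this
series), translating a resctrl "logical" RMID into the RMID the hardware
expects once bit 0 of MSR_RMID_SNC_CONFIG has been cleared could look
something like this:

/*
 * Illustrative sketch only -- not from this patch. Maps the RMID that
 * resctrl hands out (numbered from zero on every SNC node) to the RMID
 * the hardware counts against on this CPU's node.
 */
static u32 logical_rmid_to_physical_rmid(int cpu, u32 lrmid)
{
	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3_MON].r_resctrl;

	/* Without SNC the logical and physical RMID spaces are identical. */
	if (snc_nodes_per_l3_cache == 1)
		return lrmid;

	/*
	 * Each SNC node owns an equal slice of the RMID space, so offset the
	 * logical RMID by this CPU's node position within its L3 domain.
	 */
	return lrmid + (cpu_to_node(cpu) % snc_nodes_per_l3_cache) * r->num_rmid;
}

With an adjustment along those lines the counters for each SNC node can be
read independently, which is why the scope of RDT_RESOURCE_L3_MON is changed
to RESCTRL_NODE above.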