From patchwork Thu Nov 9 23:09:14 2023
X-Patchwork-Submitter: "Luck, Tony"
X-Patchwork-Id: 163630
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet,
	Shuah Khan, x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	patches@lists.linux.dev, Tony Luck
Subject: [PATCH v11 7/8] x86/resctrl: Sub NUMA Cluster detection and enable
Date: Thu, 9 Nov 2023 15:09:14 -0800
Message-ID: <20231109230915.73600-8-tony.luck@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231109230915.73600-1-tony.luck@intel.com>
References: <20231031211708.37390-1-tony.luck@intel.com>
 <20231109230915.73600-1-tony.luck@intel.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

There isn't a simple hardware bit that indicates whether a CPU is running
in Sub NUMA Cluster (SNC) mode.
Infer the state by comparing the ratio of NUMA nodes to L3 cache instances.

When SNC mode is detected, reconfigure the RMID counters by updating
the MSR_RMID_SNC_CONFIG MSR on each socket as CPUs are seen. Clearing
bit zero of the MSR divides the RMIDs and renumbers the ones on the
second SNC node to start from zero.

Reviewed-by: Peter Newman
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
Changes since v10:

Reinette: Revert two places where I'd globally swapped "h/w" for
"hardware" in comments for functions that were not touched by this patch.

 arch/x86/include/asm/msr-index.h   |  1 +
 arch/x86/kernel/cpu/resctrl/core.c | 96 ++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e3fa9cecd599..4285a5ee81fe 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1109,6 +1109,7 @@
 #define MSR_IA32_QM_CTR			0xc8e
 #define MSR_IA32_PQR_ASSOC		0xc8f
 #define MSR_IA32_L3_CBM_BASE		0xc90
+#define MSR_RMID_SNC_CONFIG		0xca0
 #define MSR_IA32_L2_CBM_BASE		0xd10
 #define MSR_IA32_MBA_THRTL_BASE		0xd50

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index f10b68b45342..fa7bc90ccc99 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -16,11 +16,14 @@
 #define pr_fmt(fmt) "resctrl: " fmt

+#include
 #include
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include "internal.h"

@@ -732,11 +735,42 @@ static void clear_closid_rmid(int cpu)
 	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
 }

+/*
+ * The power-on reset value of MSR_RMID_SNC_CONFIG is 0x1
+ * which indicates that RMIDs are configured in legacy mode.
+ * This mode is incompatible with Linux resctrl semantics
+ * as RMIDs are partitioned between SNC nodes, which requires
+ * a user to know which RMID is allocated to a task.
+ * Clearing bit 0 reconfigures the RMID counters for use
+ * in Sub NUMA Cluster mode. This mode is better for Linux.
+ * The RMID space is divided between all SNC nodes with the
+ * RMIDs renumbered to start from zero in each node when
+ * counting operations from tasks. Code to read the counters
+ * must adjust RMID counter numbers based on SNC node. See
+ * __rmid_read() for code that does this.
+ */
+static void snc_remap_rmids(int cpu)
+{
+	u64 val;
+
+	/* Only need to enable once per package. */
+	if (cpumask_first(topology_core_cpumask(cpu)) != cpu)
+		return;
+
+	rdmsrl(MSR_RMID_SNC_CONFIG, val);
+	val &= ~BIT_ULL(0);
+	wrmsrl(MSR_RMID_SNC_CONFIG, val);
+}
+
 static int resctrl_online_cpu(unsigned int cpu)
 {
 	struct rdt_resource *r;

 	mutex_lock(&rdtgroup_mutex);
+
+	if (snc_nodes_per_l3_cache > 1)
+		snc_remap_rmids(cpu);
+
 	for_each_capable_rdt_resource(r)
 		domain_add_cpu(cpu, r);
 	/* The cpu is set in default rdtgroup after online. */
@@ -991,11 +1025,73 @@ static __init bool get_rdt_resources(void)
 	return (rdt_mon_capable || rdt_alloc_capable);
 }

+/* CPU models that support MSR_RMID_SNC_CONFIG */
+static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
+	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, 0),
+	{}
+};
+
+/*
+ * There isn't a simple hardware bit that indicates whether a CPU is running
+ * in Sub NUMA Cluster (SNC) mode. Infer the state by comparing the
+ * ratio of NUMA nodes to L3 cache instances.
+ * It is not possible to accurately determine SNC state if the system is
+ * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
+ * to L3 caches. It will be OK if system is booted with hyperthreading
+ * disabled (since this doesn't affect the ratio).
+ */
+static __init int snc_get_config(void)
+{
+	unsigned long *node_caches;
+	int mem_only_nodes = 0;
+	int cpu, node, ret;
+	int num_l3_caches;
+
+	if (!x86_match_cpu(snc_cpu_ids))
+		return 1;
+
+	node_caches = bitmap_zalloc(nr_node_ids, GFP_KERNEL);
+	if (!node_caches)
+		return 1;
+
+	cpus_read_lock();
+
+	if (num_online_cpus() != num_present_cpus())
+		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
+
+	for_each_node(node) {
+		cpu = cpumask_first(cpumask_of_node(node));
+		if (cpu < nr_cpu_ids)
+			set_bit(get_cpu_cacheinfo_id(cpu, 3), node_caches);
+		else
+			mem_only_nodes++;
+	}
+	cpus_read_unlock();
+
+	num_l3_caches = bitmap_weight(node_caches, nr_node_ids);
+	kfree(node_caches);
+
+	if (!num_l3_caches)
+		return 1;
+
+	ret = (nr_node_ids - mem_only_nodes) / num_l3_caches;
+
+	if (ret > 1)
+		rdt_resources_all[RDT_RESOURCE_L3].r_resctrl.mon_scope = RESCTRL_NODE;
+
+	return ret;
+}
+
 static __init void rdt_init_res_defs_intel(void)
 {
 	struct rdt_hw_resource *hw_res;
 	struct rdt_resource *r;

+	snc_nodes_per_l3_cache = snc_get_config();
+
 	for_each_rdt_resource(r) {
 		hw_res = resctrl_to_arch_res(r);