From patchwork Wed Feb 7 03:58:40 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 197726
From: alexs@kernel.org
To: Christophe Leroy, "Aneesh Kumar K. V", "Naveen N. Rao", Ingo Molnar,
    Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Daniel Bristot de Oliveira, Frederic Weisbecker, Mark Rutland,
    Barry Song, Miaohe Lin, linuxppc-dev@lists.ozlabs.org,
    linux-kernel@vger.kernel.org
Cc: Michael Ellerman, Nicholas Piggin, Valentin Schneider,
    Srikar Dronamraju, Josh Poimboeuf, Alex Shi, Ricardo Neri,
    Yicong Yang, "Gautham R. Shenoy"
Subject: [PATCH v4 5/5] sched: rename SD_SHARE_PKG_RESOURCES to SD_SHARE_LLC
Date: Wed, 7 Feb 2024 11:58:40 +0800
Message-ID: <20240207035840.936676-1-alexs@kernel.org>
In-Reply-To: <20240207034704.935774-4-alexs@kernel.org>
References: <20240207034704.935774-4-alexs@kernel.org>

From: Alex Shi

SD_CLUSTER shares CPU resources such as LLC tags or the L2 cache, which
is easy to confuse with SD_SHARE_PKG_RESOURCES. So let's state
specifically what the latter shares: the LLC. That should reduce the
confusion.

Suggested-by: Valentin Schneider
Signed-off-by: Alex Shi
Reviewed-by: Valentin Schneider
---
 arch/powerpc/kernel/smp.c      |  6 +++---
 include/linux/sched/sd_flags.h |  4 ++--
 include/linux/sched/topology.h |  6 +++---
 kernel/sched/fair.c            |  2 +-
 kernel/sched/topology.c        | 16 ++++++++--------
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 693334c20d07..a60e4139214b 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -984,7 +984,7 @@ static bool shared_caches __ro_after_init;
 /* cpumask of CPUs with asymmetric SMT dependency */
 static int powerpc_smt_flags(void)
 {
-	int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
+	int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_LLC;
 
 	if (cpu_has_feature(CPU_FTR_ASYM_SMT)) {
 		printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n");
@@ -1010,9 +1010,9 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(splpar_asym_pack);
 static int powerpc_shared_cache_flags(void)
 {
 	if (static_branch_unlikely(&splpar_asym_pack))
-		return SD_SHARE_PKG_RESOURCES | SD_ASYM_PACKING;
+		return SD_SHARE_LLC | SD_ASYM_PACKING;
 
-	return SD_SHARE_PKG_RESOURCES;
+	return SD_SHARE_LLC;
 }
 
 static int powerpc_shared_proc_flags(void)
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index a8b28647aafc..b04a5d04dee9 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -117,13 +117,13 @@ SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 SD_FLAG(SD_CLUSTER, SDF_NEEDS_GROUPS)
 
 /*
- * Domain members share CPU package resources (i.e. caches)
+ * Domain members share CPU Last Level Caches
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
  *               the same cache(s).
  * NEEDS_GROUPS: Caches are shared between groups.
  */
-SD_FLAG(SD_SHARE_PKG_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+SD_FLAG(SD_SHARE_LLC, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
  * Only a single load balancing instance
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index a6e04b4a21d7..191b122158fb 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -38,21 +38,21 @@ extern const struct sd_flag_debug sd_flag_debug[];
 #ifdef CONFIG_SCHED_SMT
 static inline int cpu_smt_flags(void)
 {
-	return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
+	return SD_SHARE_CPUCAPACITY | SD_SHARE_LLC;
 }
 #endif
 
 #ifdef CONFIG_SCHED_CLUSTER
 static inline int cpu_cluster_flags(void)
 {
-	return SD_CLUSTER | SD_SHARE_PKG_RESOURCES;
+	return SD_CLUSTER | SD_SHARE_LLC;
 }
 #endif
 
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
-	return SD_SHARE_PKG_RESOURCES;
+	return SD_SHARE_LLC;
 }
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 10ae28e1c088..188597640b1f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10695,7 +10695,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 */
 	if (local->group_type == group_has_spare) {
 		if ((busiest->group_type > group_fully_busy) &&
-		    !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
+		    !(env->sd->flags & SD_SHARE_LLC)) {
 			/*
 			 * If busiest is overloaded, try to fill spare
 			 * capacity. This might end up creating spare capacity
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 0b33f7b05d21..e877730219d3 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -684,7 +684,7 @@ static void update_top_cache_domain(int cpu)
 	int id = cpu;
 	int size = 1;
 
-	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
+	sd = highest_flag_domain(cpu, SD_SHARE_LLC);
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
 		size = cpumask_weight(sched_domain_span(sd));
@@ -1554,7 +1554,7 @@ static struct cpumask		***sched_domains_numa_masks;
  * function. For details, see include/linux/sched/sd_flags.h.
  *
  *   SD_SHARE_CPUCAPACITY
- *   SD_SHARE_PKG_RESOURCES
+ *   SD_SHARE_LLC
  *   SD_CLUSTER
  *   SD_NUMA
  *
@@ -1566,7 +1566,7 @@ static struct cpumask		***sched_domains_numa_masks;
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
 	 SD_CLUSTER		|	\
-	 SD_SHARE_PKG_RESOURCES |	\
+	 SD_SHARE_LLC		|	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
 
@@ -1609,7 +1609,7 @@ sd_init(struct sched_domain_topology_level *tl,
 					| 0*SD_BALANCE_WAKE
 					| 1*SD_WAKE_AFFINE
 					| 0*SD_SHARE_CPUCAPACITY
-					| 0*SD_SHARE_PKG_RESOURCES
+					| 0*SD_SHARE_LLC
 					| 0*SD_SERIALIZE
 					| 1*SD_PREFER_SIBLING
 					| 0*SD_NUMA
@@ -1646,7 +1646,7 @@ sd_init(struct sched_domain_topology_level *tl,
 	if (sd->flags & SD_SHARE_CPUCAPACITY) {
 		sd->imbalance_pct = 110;
 
-	} else if (sd->flags & SD_SHARE_PKG_RESOURCES) {
+	} else if (sd->flags & SD_SHARE_LLC) {
 		sd->imbalance_pct = 117;
 		sd->cache_nice_tries = 1;
 
@@ -1671,7 +1671,7 @@ sd_init(struct sched_domain_topology_level *tl,
 	 * For all levels sharing cache; connect a sched_domain_shared
 	 * instance.
 	 */
-	if (sd->flags & SD_SHARE_PKG_RESOURCES) {
+	if (sd->flags & SD_SHARE_LLC) {
 		sd->shared = *per_cpu_ptr(sdd->sds, sd_id);
 		atomic_inc(&sd->shared->ref);
 		atomic_set(&sd->shared->nr_busy_cpus, sd_weight);
@@ -2446,8 +2446,8 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
 			struct sched_domain *child = sd->child;
 
-			if (!(sd->flags & SD_SHARE_PKG_RESOURCES) && child &&
-			    (child->flags & SD_SHARE_PKG_RESOURCES)) {
+			if (!(sd->flags & SD_SHARE_LLC) && child &&
+			    (child->flags & SD_SHARE_LLC)) {
 				struct sched_domain __rcu *top_p;
 				unsigned int nr_llcs;