Message ID | 20240207034704.935774-4-alexs@kernel.org |
---|---|
State | New |
Headers |
From: alexs@kernel.org
To: Ingo Molnar <mingo@redhat.com>, Peter Zijlstra <peterz@infradead.org>, Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, Dietmar Eggemann <dietmar.eggemann@arm.com>, Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>, Daniel Bristot de Oliveira <bristot@redhat.com>, Valentin Schneider <vschneid@redhat.com>, linux-kernel@vger.kernel.org, ricardo.neri-calderon@linux.intel.com, yangyicong@hisilicon.com
Cc: Alex Shi <alexs@kernel.org>
Subject: [PATCH v4 4/4] sched/fair: Check the SD_ASYM_PACKING flag in sched_use_asym_prio()
Date: Wed, 7 Feb 2024 11:47:04 +0800
Message-ID: <20240207034704.935774-4-alexs@kernel.org>
In-Reply-To: <20240207034704.935774-1-alexs@kernel.org>
References: <20240207034704.935774-1-alexs@kernel.org>
List-Id: <linux-kernel.vger.kernel.org>
MIME-Version: 1.0 |
Series |
[v4,1/4] sched/topology: Remove duplicate descriptions from TOPOLOGY_SD_FLAGS
|
|
Commit Message
alexs@kernel.org
Feb. 7, 2024, 3:47 a.m. UTC
From: Alex Shi <alexs@kernel.org>

sched_use_asym_prio() checks whether CPU priorities should be used. It makes sense to check for the SD_ASYM_PACKING flag inside the function. Since both sched_asym() and sched_group_asym() use sched_use_asym_prio(), remove the now-superfluous checks for the flag in various places.

Signed-off-by: Alex Shi <alexs@kernel.org>
To: linux-kernel@vger.kernel.org
To: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
To: Ben Segall <bsegall@google.com>
To: Steven Rostedt <rostedt@goodmis.org>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Valentin Schneider <vschneid@redhat.com>
To: Daniel Bristot de Oliveira <bristot@redhat.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
To: Juri Lelli <juri.lelli@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
To: Ingo Molnar <mingo@redhat.com>
---
 kernel/sched/fair.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)
Comments
Hi Valentin & Ricardo,

Any more comments on this patch? Or a Reviewed-by from you as a Chinese New Year gift for this patch? :)

Thanks
Alex

On 2/7/24 11:47 AM, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
>
> sched_use_asym_prio() checks whether CPU priorities should be used. It
> makes sense to check for the SD_ASYM_PACKING() inside the function.
> Since both sched_asym() and sched_group_asym() use sched_use_asym_prio(),
> remove the now superfluous checks for the flag in various places.

[remainder of quoted patch snipped]
On Fri, Feb 09, 2024 at 07:12:10PM +0800, kuiliang Shi wrote:
> Hi Valentin&Ricardo,
>
> Any more comment for this patch? or Reviewed-by from you as a Chinese new year gift for this patch? :)

I will give you a Tested-by tag ;). I have started testing your patches but I am not done yet.

[remainder of quoted patch snipped]
On Wed, Feb 07, 2024 at 11:47:04AM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
>
> sched_use_asym_prio() checks whether CPU priorities should be used. It
> makes sense to check for the SD_ASYM_PACKING() inside the function.
> Since both sched_asym() and sched_group_asym() use sched_use_asym_prio(),
> remove the now superfluous checks for the flag in various places.

Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

Tested on Alder Lake and Meteor Lake, which use asym_packing.

Tested-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

[remainder of quoted patch snipped]
Ricardo Neri <ricardo.neri-calderon@linux.intel.com> wrote on Sat, Feb 10, 2024 at 09:11:
>
> On Wed, Feb 07, 2024 at 11:47:04AM +0800, alexs@kernel.org wrote:
> > From: Alex Shi <alexs@kernel.org>
> > ...
>
> Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
>
> Tested on Alder Lake and Meteor Lake, which use asym_packing.
>
> Tested-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

It's the best gift for my lunar new year! :) The next version with your Tested-by and Reviewed-by tags is coming. Thanks a lot!
Alex

[remainder of quoted patch snipped]
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 942b6358f683..10ae28e1c088 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9740,6 +9740,9 @@ group_type group_classify(unsigned int imbalance_pct,
  */
 static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
 {
+	if (!(sd->flags & SD_ASYM_PACKING))
+		return false;
+
 	if (!sched_smt_active())
 		return true;
 
@@ -9941,11 +9944,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_weight = group->group_weight;
 
 	/* Check if dst CPU is idle and preferred to this group */
-	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
-	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
-	    sched_group_asym(env, sgs, group)) {
+	if (!local_group && env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
+	    sched_group_asym(env, sgs, group))
 		sgs->group_asym_packing = 1;
-	}
 
 	/* Check for loaded SMT group to be balanced to dst CPU */
 	if (!local_group && smt_balance(env, sgs, group))
@@ -11041,9 +11042,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		 * If balancing between cores, let lower priority CPUs help
 		 * SMT cores with more than one busy sibling.
 		 */
-		if ((env->sd->flags & SD_ASYM_PACKING) &&
-		    sched_asym(env->sd, i, env->dst_cpu) &&
-		    nr_running == 1)
+		if (sched_asym(env->sd, i, env->dst_cpu) && nr_running == 1)
 			continue;
 
 		switch (env->migration_type) {
@@ -11139,8 +11138,7 @@ asym_active_balance(struct lb_env *env)
 	 * the lower priority @env::dst_cpu help it. Do not follow
 	 * CPU priority.
 	 */
-	return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) &&
-	       sched_use_asym_prio(env->sd, env->dst_cpu) &&
+	return env->idle != CPU_NOT_IDLE && sched_use_asym_prio(env->sd, env->dst_cpu) &&
 	       (sched_asym_prefer(env->dst_cpu, env->src_cpu) ||
 		!sched_use_asym_prio(env->sd, env->src_cpu));
 }