From patchwork Mon Nov 28 13:20:45 2022
X-Patchwork-Submitter: Ricardo Neri
X-Patchwork-Id: 26755
From: Ricardo Neri
To: "Peter Zijlstra (Intel)", Juri Lelli, Vincent Guittot
Cc: Ricardo Neri, "Ravi V. Shankar", Ben Segall,
    Daniel Bristot de Oliveira, Dietmar Eggemann, Len Brown, Mel Gorman,
    "Rafael J. Wysocki", Srinivas Pandruvada, Steven Rostedt, Tim Chen,
    Valentin Schneider, x86@kernel.org, "Joel Fernandes (Google)",
    linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    "Tim C . Chen"
Subject: [PATCH v2 07/22] sched/fair: Compute IPC class scores for load
 balancing
Date: Mon, 28 Nov 2022 05:20:45 -0800
Message-Id: <20221128132100.30253-8-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20221128132100.30253-1-ricardo.neri-calderon@linux.intel.com>
References: <20221128132100.30253-1-ricardo.neri-calderon@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Compute the joint total (both current and prospective) IPC class score of
a scheduling group and the local scheduling group. These IPCC statistics
are used during asym_packing load balancing, which implies that the
candidate sched group will have one fewer busy CPU after load balancing.
This observation is important for physical cores with SMT support. The
IPCC score of scheduling groups composed of SMT siblings needs to consider
that the siblings share CPU resources. When computing the total IPCC score
of the scheduling group, divide the score of each sibling by the number of
busy siblings.

Cc: Ben Segall
Cc: Daniel Bristot de Oliveira
Cc: Dietmar Eggemann
Cc: Joel Fernandes (Google)
Cc: Len Brown
Cc: Mel Gorman
Cc: Rafael J. Wysocki
Cc: Srinivas Pandruvada
Cc: Steven Rostedt
Cc: Tim C. Chen
Cc: Valentin Schneider
Cc: x86@kernel.org
Cc: linux-pm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ricardo Neri
---
Changes since v1:
 * Implemented cleanups and reworks from PeterZ. I took all his
   suggestions, except the computation of the IPC score before and after
   load balancing. We are computing not the average score, but the
   *total*.
 * Check for SD_SHARE_CPUCAPACITY to compute the throughput of the SMT
   siblings of a physical core.
 * Used the new interface names.
 * Reworded commit message for clarity.
---
 kernel/sched/fair.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3a1d6c50a19b..e333f9623b3a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8766,6 +8766,10 @@ struct sg_lb_stats {
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
 #endif
+#ifdef CONFIG_IPC_CLASSES
+	long ipcc_score_after; /* Prospective IPCC score after load balancing */
+	long ipcc_score_before; /* IPCC score before load balancing */
+#endif
 };

 /*
@@ -9140,6 +9144,38 @@ static void update_sg_lb_ipcc_stats(struct sg_lb_ipcc_stats *sgcs,
 	}
 }

+static void update_sg_lb_stats_scores(struct sg_lb_ipcc_stats *sgcs,
+				      struct sg_lb_stats *sgs,
+				      struct sched_group *sg,
+				      int dst_cpu)
+{
+	int busy_cpus, score_on_dst_cpu;
+	long before, after;
+
+	if (!sched_ipcc_enabled())
+		return;
+
+	busy_cpus = sgs->group_weight - sgs->idle_cpus;
+	/* No busy CPUs in the group. No tasks to move. */
+	if (!busy_cpus)
+		return;
+
+	score_on_dst_cpu = arch_get_ipcc_score(sgcs->min_ipcc, dst_cpu);
+
+	before = sgcs->sum_score;
+	after = before - sgcs->min_score;
+
+	/* SMT siblings share throughput. */
+	if (busy_cpus > 1 && sg->flags & SD_SHARE_CPUCAPACITY) {
+		before /= busy_cpus;
+		/* One sibling will become idle after load balance. */
+		after /= busy_cpus - 1;
+	}
+
+	sgs->ipcc_score_after = after + score_on_dst_cpu;
+	sgs->ipcc_score_before = before;
+}
+
 #else /* CONFIG_IPC_CLASSES */
 static void update_sg_lb_ipcc_stats(struct sg_lb_ipcc_stats *sgcs,
 				    struct rq *rq)
@@ -9149,6 +9185,14 @@ static void update_sg_lb_ipcc_stats(struct sg_lb_ipcc_stats *sgcs,
 static void init_rq_ipcc_stats(struct sg_lb_ipcc_stats *class_sgs)
 {
 }
+
+static void update_sg_lb_stats_scores(struct sg_lb_ipcc_stats *sgcs,
+				      struct sg_lb_stats *sgs,
+				      struct sched_group *sg,
+				      int dst_cpu)
+{
+}
+
 #endif /* CONFIG_IPC_CLASSES */

 /**
@@ -9329,6 +9373,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
 	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
 	    sched_asym(env, sds, sgs, group)) {
+		update_sg_lb_stats_scores(&sgcs, sgs, group, env->dst_cpu);
 		sgs->group_asym_packing = 1;
 	}