Message ID: 20231206090043.634697-1-pierre.gondois@arm.com
State: New
Headers:
From: Pierre Gondois <pierre.gondois@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Qais Yousef <qyousef@layalina.io>, Pierre Gondois <pierre.gondois@arm.com>, Vincent Guittot <vincent.guittot@linaro.org>, Dietmar Eggemann <dietmar.eggemann@arm.com>, Ingo Molnar <mingo@redhat.com>, Peter Zijlstra <peterz@infradead.org>, Juri Lelli <juri.lelli@redhat.com>, Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, Valentin Schneider <vschneid@redhat.com>
Subject: [PATCH v3] sched/fair: Use all little CPUs for CPU-bound workload
Date: Wed, 6 Dec 2023 10:00:43 +0100
Message-Id: <20231206090043.634697-1-pierre.gondois@arm.com>
Series: [v3] sched/fair: Use all little CPUs for CPU-bound workload
Commit Message
Pierre Gondois
Dec. 6, 2023, 9 a.m. UTC
Running n CPU-bound tasks on an n-CPU platform:
- with asymmetric CPU capacity
- not being a DynamIQ system (i.e. having a PKG-level sched domain
  without the SD_SHARE_PKG_RESOURCES flag set)
might result in a task placement where two tasks run on a big CPU
and none on a little CPU. A better placement would use all CPUs.

Testing platform:
Juno-r2:
- 2 big CPUs (1-2), maximum capacity of 1024
- 4 little CPUs (0,3-5), maximum capacity of 383

Testing workload ([1]):
Spawn 6 CPU-bound tasks. During the first 100ms (step 1), each task
is affine to a CPU, except for:
- one little CPU, which is left idle.
- one big CPU, which has 2 tasks affine.
After the 100ms (step 2), remove the cpumask affinity.

Before patch:
During step 2, the load balancer running from the idle CPU tags sched
domains as:
- little CPUs: 'group_has_spare'. Cf. group_has_capacity() and
  group_is_overloaded(): 3 CPU-bound tasks run on a 4-CPU
  sched domain, and the idle CPU provides enough spare capacity
  regarding the imbalance_pct.
- big CPUs: 'group_overloaded'. Indeed, 3 tasks run on a 2-CPU
  sched domain, so the following path is used:
    group_is_overloaded()
    \-if (sgs->sum_nr_running <= sgs->group_weight) return true;

The following path, which would change the migration type to
'migrate_task', is not taken:
    calculate_imbalance()
    \-if (env->idle != CPU_NOT_IDLE && env->imbalance == 0)
as the local group has some spare capacity, so the imbalance
is not 0.

The migration type requested is 'migrate_util' and the busiest
runqueue is the big CPU's runqueue, which has 2 tasks (each with a
utilization of 512). The idle little CPU cannot pull one of these
tasks, as its capacity is too small for the task. The following path
is used:
    detach_tasks()
    \-case migrate_util:
    \-if (util > env->imbalance) goto next;

After patch:
As the number of failed balancing attempts grows (with
'nr_balance_failed'), progressively make it easier to migrate
a big task to the idling little CPU. A similar mechanism is used
for the 'migrate_load' migration type.

Improvement:
Running the testing workload [1] with step 2 representing a ~10s
load for a big CPU:
    Before patch: ~19.3s
    After patch:  ~18s (-6.7%)

Similar issue reported at:
https://lore.kernel.org/lkml/20230716014125.139577-1-qyousef@layalina.io/

v1: https://lore.kernel.org/all/20231110125902.2152380-1-pierre.gondois@arm.com/
v2: https://lore.kernel.org/all/20231124153323.3202444-1-pierre.gondois@arm.com/

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Pierre Gondois <pierre.gondois@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
Notes:
    v2:
    - Used Vincent's approach.
    v3:
    - Updated commit message.
    - Added Reviewed-by tags.

 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Comments
On 12/06/23 10:00, Pierre Gondois wrote:
> Running n CPU-bound tasks on an n CPUs platform:
> - with asymmetric CPU capacity
> - not being a DynamIq system (i.e. having a PKG level sched domain
>   without the SD_SHARE_PKG_RESOURCES flag set)
> might result in a task placement where two tasks run on a big CPU
> and none on a little CPU. This placement could be more optimal by
> using all CPUs.

[...]

Thanks Pierre! I think this is a good candidate for stable. It is
likely to help some folks shipping with phantom domains on impacted
stable kernels.

Cheers

--
Qais Yousef
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d7a3c63a2171..9481b8cff31b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9060,7 +9060,7 @@ static int detach_tasks(struct lb_env *env)
 		case migrate_util:
 			util = task_util_est(p);
 
-			if (util > env->imbalance)
+			if (shr_bound(util, env->sd->nr_balance_failed) > env->imbalance)
 				goto next;
 
 			env->imbalance -= util;