From patchwork Tue Mar 28 09:26:23 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75990
Message-ID: <20230328110353.631740156@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:23 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, Parth Shah
Subject: [PATCH 01/17] sched: Introduce latency-nice as a per-task attribute
References: <20230328092622.062917921@infradead.org>
From: Parth Shah

Latency-nice indicates the latency requirements of a task with respect to the
other tasks in the system. The attribute takes values in the range [-20, 19],
both inclusive, in line with task nice values. Just like task nice, -20 is the
'highest' priority and conveys that the task should get minimal latency;
conversely, 19 is the lowest priority and conveys that the task will get the
least consideration and thus maximal latency.

[peterz: rebase, squash]
Signed-off-by: Parth Shah
Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/sched.h            |    1 +
 include/uapi/linux/sched.h       |    4 +++-
 include/uapi/linux/sched/types.h |   19 +++++++++++++++++++
 init/init_task.c                 |    3 ++-
 kernel/sched/core.c              |   27 ++++++++++++++++++++++++++-
 kernel/sched/debug.c             |    1 +
 tools/include/uapi/linux/sched.h |    4 +++-
 7 files changed, 55 insertions(+), 4 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -784,6 +784,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
+	int				latency_prio;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ===========================
+ *
+ * A subset of sched_attr attributes allows specifying the relative latency
+ * requirements of a task with respect to the other tasks running/queued in
+ * the system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with a latency_nice value of LATENCY_NICE_MIN can be taken as
+ * requiring less latency than a task with a higher latency_nice value.
  */
 struct sched_attr {
 	__u32 size;
 
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_prio	= DEFAULT_PRIO,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
@@ -89,7 +90,7 @@ struct task_struct init_task
 		.fn = do_no_restart_syscall,
 	},
 	.se		= {
-		.group_node	= LIST_HEAD_INIT(init_task.se.group_node),
+		.group_node	= LIST_HEAD_INIT(init_task.se.group_node),
 	},
 	.rt		= {
 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4684,6 +4684,8 @@ int sched_fork(unsigned long clone_flags
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_prio = NICE_TO_PRIO(0);
+
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
@@ -7428,7 +7430,7 @@ static struct task_struct *find_process_
 #define SETPARAM_POLICY	-1
 
 static void __setscheduler_params(struct task_struct *p,
-		const struct sched_attr *attr)
+				  const struct sched_attr *attr)
 {
 	int policy = attr->sched_policy;
@@ -7452,6 +7454,13 @@ static void __setscheduler_params(struct
 	set_load_weight(p, true);
 }
 
+static void __setscheduler_latency(struct task_struct *p,
+				   const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		p->latency_prio = NICE_TO_PRIO(attr->sched_latency_nice);
+}
+
 /*
  * Check the target process has a UID that matches the current process's:
  */
@@ -7592,6 +7601,13 @@ static int __sched_setscheduler(struct t
 			return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
@@ -7626,6 +7642,9 @@ static int __sched_setscheduler(struct t
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
+		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+		    attr->sched_latency_nice != PRIO_TO_NICE(p->latency_prio))
+			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
 		retval = 0;
@@ -7714,6 +7733,7 @@ static int __sched_setscheduler(struct t
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7924,6 +7944,9 @@ static int sched_copy_attr(struct sched_
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
	 * XXX: Do we want to be lenient like existing syscalls; or do we want
	 * to be strict and return an error on out-of-bounds values?
@@ -8161,6 +8184,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = PRIO_TO_NICE(p->latency_prio);
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,6 +1043,7 @@ void proc_sched_show_task(struct task_st
 #endif
 	P(policy);
 	P(prio);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
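For illustration only (not part of the patch): a minimal userspace sketch of
setting the new attribute through sched_setattr(2). The sched_attr layout and
the flag values are copied from the patched UAPI header above; glibc has no
wrapper, so the raw syscall is used. SCHED_FLAG_KEEP_ALL (KEEP_POLICY 0x08 |
KEEP_PARAMS 0x10) keeps the existing policy and parameters so only the
latency hint changes.

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define SCHED_FLAG_KEEP_ALL	0x18	/* KEEP_POLICY | KEEP_PARAMS */
	#define SCHED_FLAG_LATENCY_NICE	0x80

	struct sched_attr {			/* VER2 layout from the patch */
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;
		uint32_t sched_priority;
		uint64_t sched_runtime;		/* SCHED_DEADLINE fields */
		uint64_t sched_deadline;
		uint64_t sched_period;
		uint32_t sched_util_min;	/* utilization clamps (VER1) */
		uint32_t sched_util_max;
		int32_t  sched_latency_nice;	/* latency hint (VER2) */
	};

	int main(void)
	{
		struct sched_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);	/* >= SCHED_ATTR_SIZE_VER2 (60) */
		attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_LATENCY_NICE;
		attr.sched_latency_nice = -20;	/* request minimal latency */

		if (syscall(SYS_sched_setattr, 0, &attr, 0))	/* 0 == self */
			perror("sched_setattr");
		return 0;
	}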
From patchwork Tue Mar 28 09:26:24 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75997
Message-ID: <20230328110353.711028613@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:24 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de
Subject: [PATCH 02/17] sched/fair: Add latency_offset
References: <20230328092622.062917921@infradead.org>
From: Vincent Guittot

Murdered-by: Peter Zijlstra (Intel)
Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: K Prateek Nayak
---
 include/linux/sched.h |    2 ++
 kernel/sched/core.c   |   12 +++++++++++-
 kernel/sched/fair.c   |    8 ++++++++
 kernel/sched/sched.h  |    2 ++
 4 files changed, 23 insertions(+), 1 deletion(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -568,6 +568,8 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long		runnable_weight;
 #endif
+	/* preemption offset in ns */
+	long			latency_offset;
 
 #ifdef CONFIG_SMP
 	/*
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1285,6 +1285,11 @@ static void set_load_weight(struct task_
 	}
 }
 
+static void set_latency_offset(struct task_struct *p)
+{
+	p->se.latency_offset = calc_latency_offset(p->latency_prio - MAX_RT_PRIO);
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4433,6 +4438,8 @@ static void __sched_fork(unsigned long c
 	p->se.vruntime			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
 
+	set_latency_offset(p);
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq			= NULL;
 #endif
@@ -4685,6 +4692,7 @@ int sched_fork(unsigned long clone_flags
 		set_load_weight(p, false);
 
 		p->latency_prio = NICE_TO_PRIO(0);
+		set_latency_offset(p);
 
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
@@ -7457,8 +7465,10 @@ static void __setscheduler_params(struct
 static void __setscheduler_latency(struct task_struct *p,
 				   const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
 		p->latency_prio = NICE_TO_PRIO(attr->sched_latency_nice);
+		set_latency_offset(p);
+	}
 }
 
 /*
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -703,6 +703,14 @@ int sched_update_scaling(void)
 }
 #endif
 
+long calc_latency_offset(int prio)
+{
+	u32 weight = sched_prio_to_weight[prio];
+	u64 base = sysctl_sched_min_granularity;
+
+	return div_u64(base << SCHED_FIXEDPOINT_SHIFT, weight);
+}
+
 /*
  * delta /= w
  */
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2475,6 +2475,8 @@ extern unsigned int sysctl_numa_balancin
 extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif
 
+extern long calc_latency_offset(int prio);
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
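To get a feel for the numbers (illustrative, not part of the patch):
calc_latency_offset() maps the latency nice value through the regular nice
weight table, so a lower latency_nice yields a much smaller preemption
offset. A standalone sketch of the arithmetic, with sched_prio_to_weight
entries copied from kernel/sched/core.c and an example base of 750000 ns
standing in for sysctl_sched_min_granularity:

	#include <stdio.h>
	#include <stdint.h>

	#define SCHED_FIXEDPOINT_SHIFT	10

	int main(void)
	{
		/* a few sched_prio_to_weight[] entries, indexed by nice */
		const struct { int nice; uint32_t weight; } w[] = {
			{ -20, 88761 }, { -10, 9548 }, { 0, 1024 },
			{  10,   110 }, {  19,   15 },
		};
		uint64_t base = 750000;	/* ns; example min granularity */

		for (unsigned int i = 0; i < sizeof(w) / sizeof(w[0]); i++)
			printf("latency_nice %3d -> offset %9llu ns\n",
			       w[i].nice, (unsigned long long)
			       ((base << SCHED_FIXEDPOINT_SHIFT) / w[i].weight));
		return 0;
	}

With these values, latency_nice -20 gives an offset of ~8.7us, 0 gives 750us
and 19 gives ~51ms; the more latency sensitive the task, the sooner it
becomes eligible to preempt.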
From patchwork Tue Mar 28 09:26:25 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75995
Message-ID: <20230328110353.779246960@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:25 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de
Subject: [PATCH 03/17] sched/fair: Add sched group latency support
References: <20230328092622.062917921@infradead.org>

From: Vincent Guittot

A task can set its latency priority with sched_setattr(), which is then used
to set the latency offset of its sched_entity, but sched group entities still
have the default latency offset value. Add a latency.nice field to the cpu
cgroup controller to set the latency priority of the group, similarly to
sched_setattr(). The latency priority is then used to set the offset of the
sched_entities of the group.

Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: K Prateek Nayak
Link: https://lkml.kernel.org/r/20230224093454.956298-7-vincent.guittot@linaro.org
---
 Documentation/admin-guide/cgroup-v2.rst |   10 ++++++++++
 kernel/sched/core.c                     |   30 ++++++++++++++++++++++++++++++
 kernel/sched/fair.c                     |   32 ++++++++++++++++++++++++++++++++
 kernel/sched/sched.h                    |    4 ++++
 4 files changed, 76 insertions(+)

--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1121,6 +1121,16 @@ All time durations are in microseconds.
 	values similar to the sched_setattr(2). This maximum utilization
 	value is used to clamp the task specific maximum utilization clamp.
 
+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2). The latency_nice of a group is
+	used to limit the impact of the latency_nice of a task outside the
+	group.
 
 Memory
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11068,6 +11068,25 @@ static int cpu_idle_write_s64(struct cgr
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	return PRIO_TO_NICE(css_tg(css)->latency_prio);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 nice)
+{
+	int prio;
+
+	if (nice < MIN_NICE || nice > MAX_NICE)
+		return -ERANGE;
+
+	prio = NICE_TO_PRIO(nice);
+
+	return sched_group_set_latency(css_tg(css), prio);
+}
 #endif
 
 static struct cftype cpu_legacy_files[] = {
@@ -11082,6 +11101,11 @@ static struct cftype cpu_legacy_files[]
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11299,6 +11323,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12264,6 +12264,7 @@ int alloc_fair_sched_group(struct task_g
 		goto err;
 
 	tg->shares = NICE_0_LOAD;
+	tg->latency_prio = DEFAULT_PRIO;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
@@ -12362,6 +12363,9 @@ void init_tg_cfs_entry(struct task_group
 	}
 
 	se->my_q = cfs_rq;
+
+	se->latency_offset = calc_latency_offset(tg->latency_prio - MAX_RT_PRIO);
+
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -12490,6 +12494,34 @@ int sched_group_set_idle(struct task_gro
 
 	mutex_unlock(&shares_mutex);
 	return 0;
+}
+
+int sched_group_set_latency(struct task_group *tg, int prio)
+{
+	long latency_offset;
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_prio == prio) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_prio = prio;
+	latency_offset = calc_latency_offset(prio - MAX_RT_PRIO);
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_offset, latency_offset);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
 }
 
 #else /* CONFIG_FAIR_GROUP_SCHED */
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -378,6 +378,8 @@ struct task_group {
 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;
+	/* latency priority of the group. */
+	int			latency_prio;
 
 #ifdef CONFIG_SMP
 	/*
@@ -488,6 +490,8 @@ extern int sched_group_set_shares(struct
 extern int sched_group_set_idle(struct task_group *tg, long idle);
 
+extern int sched_group_set_latency(struct task_group *tg, int prio);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
 			     struct cfs_rq *prev, struct cfs_rq *next);
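Usage of the new interface is plain cgroup-v2 file I/O; an illustrative
sketch follows (the group path is an example, and the cpu controller must be
enabled for the group):

	#include <fcntl.h>
	#include <limits.h>
	#include <stdio.h>
	#include <unistd.h>

	/* write <nice> into <cgroup>/cpu.latency.nice */
	static int set_group_latency_nice(const char *cgroup, int nice)
	{
		char path[PATH_MAX], buf[16];
		int fd, len, ret;

		snprintf(path, sizeof(path), "%s/cpu.latency.nice", cgroup);
		len = snprintf(buf, sizeof(buf), "%d\n", nice);

		fd = open(path, O_WRONLY);
		if (fd < 0)
			return -1;
		ret = write(fd, buf, len) == len ? 0 : -1;
		close(fd);
		return ret;
	}

	int main(void)
	{
		/* example group; -20 requests minimal latency, -ERANGE
		 * is returned for values outside [-20, 19] */
		if (set_group_latency_nice("/sys/fs/cgroup/interactive", -20))
			perror("cpu.latency.nice");
		return 0;
	}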
From patchwork Tue Mar 28 09:26:26 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75998
Message-ID: <20230328110353.853385546@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:26 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de
Subject: [PATCH 04/17] sched/fair: Add avg_vruntime
References: <20230328092622.062917921@infradead.org>
In order to move to an eligibility based scheduling policy, we need a better
approximation of the ideal scheduler. Specifically, for a virtual time
weighted fair queueing based scheduler the ideal scheduler will be the
weighted average of the individual virtual runtimes (math in the comment).

As such, compute the weighted average to approximate the ideal scheduler --
note that the approximation is in the individual task behaviour, which isn't
strictly conformant. Specifically consider adding a task with a vruntime left
of center; in this case the average will move backwards in time -- something
the ideal scheduler would of course never do.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/debug.c |   32 ++++++------
 kernel/sched/fair.c  |  111 +++++++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h |    5 ++
 3 files changed, 128 insertions(+), 20 deletions(-)

--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -580,10 +580,9 @@ static void print_rq(struct seq_file *m,
 
 void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 {
-	s64 MIN_vruntime = -1, min_vruntime, max_vruntime = -1,
-		spread, rq0_min_vruntime, spread0;
+	s64 left_vruntime = -1, min_vruntime, right_vruntime = -1, spread;
+	struct sched_entity *last, *first;
 	struct rq *rq = cpu_rq(cpu);
-	struct sched_entity *last;
 	unsigned long flags;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
@@ -597,26 +596,25 @@ void print_cfs_rq(struct seq_file *m, in
 			SPLIT_NS(cfs_rq->exec_clock));
 
 	raw_spin_rq_lock_irqsave(rq, flags);
-	if (rb_first_cached(&cfs_rq->tasks_timeline))
-		MIN_vruntime = (__pick_first_entity(cfs_rq))->vruntime;
+	first = __pick_first_entity(cfs_rq);
+	if (first)
+		left_vruntime = first->vruntime;
 	last = __pick_last_entity(cfs_rq);
 	if (last)
-		max_vruntime = last->vruntime;
+		right_vruntime = last->vruntime;
 	min_vruntime = cfs_rq->min_vruntime;
-	rq0_min_vruntime = cpu_rq(0)->cfs.min_vruntime;
 	raw_spin_rq_unlock_irqrestore(rq, flags);
-	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "MIN_vruntime",
-			SPLIT_NS(MIN_vruntime));
+
+	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "left_vruntime",
+			SPLIT_NS(left_vruntime));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "min_vruntime",
 			SPLIT_NS(min_vruntime));
-	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "max_vruntime",
-			SPLIT_NS(max_vruntime));
-	spread = max_vruntime - MIN_vruntime;
-	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread",
-			SPLIT_NS(spread));
-	spread0 = min_vruntime - rq0_min_vruntime;
-	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread0",
-			SPLIT_NS(spread0));
+	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "avg_vruntime",
+			SPLIT_NS(avg_vruntime(cfs_rq)));
+	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "right_vruntime",
+			SPLIT_NS(right_vruntime));
+	spread = right_vruntime - left_vruntime;
+	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_spread_over",
 			cfs_rq->nr_spread_over);
 	SEQ_printf(m, "  .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -601,9 +601,108 @@ static inline bool entity_before(const s
 	return (s64)(a->vruntime - b->vruntime) < 0;
 }
 
+static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	return (s64)(se->vruntime - cfs_rq->min_vruntime);
+}
+
 #define __node_2_se(node) \
 	rb_entry((node), struct sched_entity, run_node)
 
+/*
+ * Compute virtual time from the per-task service numbers:
+ *
+ * Fair schedulers conserve lag: \Sum lag_i = 0
+ *
+ * lag_i = S - s_i = w_i * (V - v_i)
+ *
+ * \Sum lag_i = 0 -> \Sum w_i * (V - v_i) = V * \Sum w_i - \Sum w_i * v_i = 0
+ *
+ * From which we solve V:
+ *
+ *     \Sum v_i * w_i
+ * V = --------------
+ *        \Sum w_i
+ *
+ * However, since v_i is u64, and the multiplication could easily overflow,
+ * transform it into a relative form that uses smaller quantities:
+ *
+ * Substitute: v_i == (v_i - v) + v
+ *
+ *     \Sum ((v_i - v) + v) * w_i   \Sum (v_i - v) * w_i
+ * V = -------------------------- = -------------------- + v
+ *              \Sum w_i                  \Sum w_i
+ *
+ * min_vruntime = v
+ * avg_vruntime = \Sum (v_i - v) * w_i
+ * cfs_rq->load = \Sum w_i
+ *
+ * Since min_vruntime is a monotonic increasing variable that closely tracks
+ * the per-task service, these deltas: (v_i - v), will be in the order of the
+ * maximal (virtual) lag induced in the system due to quantisation.
+ */
+static void
+avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	unsigned long weight = scale_load_down(se->load.weight);
+	s64 key = entity_key(cfs_rq, se);
+
+	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->avg_load += weight;
+}
+
+static void
+avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	unsigned long weight = scale_load_down(se->load.weight);
+	s64 key = entity_key(cfs_rq, se);
+
+	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->avg_load -= weight;
+}
+
+static inline
+void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
+{
+	/*
+	 * v' = v + d ==> avg_vruntime' = avg_vruntime - d*avg_load
+	 */
+	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
+}
+
+u64 avg_vruntime(struct cfs_rq *cfs_rq)
+{
+	struct sched_entity *curr = cfs_rq->curr;
+	s64 avg = cfs_rq->avg_vruntime;
+	long load = cfs_rq->avg_load;
+
+	if (curr && curr->on_rq) {
+		unsigned long weight = scale_load_down(curr->load.weight);
+
+		avg += entity_key(cfs_rq, curr) * weight;
+		load += weight;
+	}
+
+	if (load)
+		avg = div_s64(avg, load);
+
+	return cfs_rq->min_vruntime + avg;
+}
+
+static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime)
+{
+	u64 min_vruntime = cfs_rq->min_vruntime;
+	/*
+	 * open coded max_vruntime() to allow updating avg_vruntime
+	 */
+	s64 delta = (s64)(vruntime - min_vruntime);
+	if (delta > 0) {
+		avg_vruntime_update(cfs_rq, delta);
+		min_vruntime = vruntime;
+	}
+	return min_vruntime;
+}
+
 static void update_min_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
@@ -629,7 +728,7 @@ static void update_min_vruntime(struct c
 	/* ensure we never gain time by being placed backwards. */
 	u64_u32_store(cfs_rq->min_vruntime,
-		      max_vruntime(cfs_rq->min_vruntime, vruntime));
+		      __update_min_vruntime(cfs_rq, vruntime));
 }
 
 static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
@@ -642,12 +741,14 @@ static inline bool __entity_less(struct
  */
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	avg_vruntime_add(cfs_rq, se);
 	rb_add_cached(&se->run_node, &cfs_rq->tasks_timeline, __entity_less);
 }
 
 static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	rb_erase_cached(&se->run_node, &cfs_rq->tasks_timeline);
+	avg_vruntime_sub(cfs_rq, se);
 }
 
 struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
@@ -3330,6 +3431,8 @@ static void reweight_entity(struct cfs_r
 		/* commit outstanding execution time */
 		if (cfs_rq->curr == se)
 			update_curr(cfs_rq);
+		else
+			avg_vruntime_sub(cfs_rq, se);
 		update_load_sub(&cfs_rq->load, se->load.weight);
 	}
 	dequeue_load_avg(cfs_rq, se);
@@ -3345,9 +3448,11 @@ static void reweight_entity(struct cfs_r
 #endif
 
 	enqueue_load_avg(cfs_rq, se);
-	if (se->on_rq)
+	if (se->on_rq) {
 		update_load_add(&cfs_rq->load, se->load.weight);
-
+		if (cfs_rq->curr != se)
+			avg_vruntime_add(cfs_rq, se);
+	}
 }
 
 void reweight_task(struct task_struct *p, int prio)
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -558,6 +558,9 @@ struct cfs_rq {
 	unsigned int		idle_nr_running;   /* SCHED_IDLE */
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */
 
+	s64			avg_vruntime;
+	u64			avg_load;
+
 	u64			exec_clock;
 	u64			min_vruntime;
 #ifdef CONFIG_SCHED_CORE
@@ -3312,4 +3315,6 @@ static inline void switch_mm_cid(struct
 static inline void switch_mm_cid(struct task_struct *prev, struct task_struct *next) { }
 #endif
 
+extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+
 #endif /* _KERNEL_SCHED_SCHED_H */
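A standalone toy model of the bookkeeping above (made-up numbers, not part of
the patch), showing how V is recovered from the incrementally maintained
weighted key sum and weight sum:

	#include <stdio.h>
	#include <stdint.h>

	struct toy_rq {
		int64_t  avg_vruntime;	/* \Sum (v_i - min_vruntime) * w_i */
		int64_t  avg_load;	/* \Sum w_i */
		uint64_t min_vruntime;
	};

	static void toy_add(struct toy_rq *rq, uint64_t v, long w)
	{
		rq->avg_vruntime += (int64_t)(v - rq->min_vruntime) * w;
		rq->avg_load += w;
	}

	int main(void)
	{
		struct toy_rq rq = { .min_vruntime = 1000 };

		toy_add(&rq, 1000, 1024);	/* nice 0 task at min_vruntime */
		toy_add(&rq, 4000, 15);		/* nice 19 task, 3000 behind */

		/* V = min_vruntime + avg_vruntime / avg_load
		 *   = 1000 + (0*1024 + 3000*15)/1039 = 1043: barely above
		 * min_vruntime, because the heavy entity anchors the average. */
		printf("V = %llu\n", (unsigned long long)
		       (rq.min_vruntime + rq.avg_vruntime / rq.avg_load));
		return 0;
	}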
From patchwork Tue Mar 28 09:26:27 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 76001
Message-ID: <20230328110353.920761844@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:27 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de
Subject: [PATCH 05/17] sched/fair: Remove START_DEBIT
References: <20230328092622.062917921@infradead.org>

With the introduction of avg_vruntime() there is no need to use worse
approximations. Take the 0-lag point as the starting point for inserting new
tasks.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c     |   21 +--------------------
 kernel/sched/features.h |    6 ------
 2 files changed, 1 insertion(+), 26 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -882,16 +882,6 @@ static u64 sched_slice(struct cfs_rq *cf
 	return slice;
 }
 
-/*
- * We calculate the vruntime slice of a to-be-inserted task.
- *
- * vs = s/w
- */
-static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
-{
-	return calc_delta_fair(sched_slice(cfs_rq, se), se);
-}
-
 #include "pelt.h"
 #ifdef CONFIG_SMP
@@ -4781,16 +4771,7 @@ static inline bool entity_is_long_sleepe
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
-	u64 vruntime = cfs_rq->min_vruntime;
-
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
+	u64 vruntime = avg_vruntime(cfs_rq);
 
 	/* sleeps up to a single latency don't count. */
 	if (!initial) {
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -7,12 +7,6 @@ SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
 
 /*
- * Place new tasks ahead so that they do not starve already running
- * tasks
- */
-SCHED_FEAT(START_DEBIT, true)
-
-/*
  * Prefer to schedule the task we woke last (assuming it failed
  * wakeup-preemption), since its likely going to consume data we
  * touched, increases cache locality.
 */
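Why the 0-lag point works as the insertion point: with the lag definition
from the avg_vruntime patch, an entity inserted exactly at v_i = V gets

	lag_i = w_i * (V - v_i) = w_i * (V - V) = 0

so, whatever its weight, a new task neither gains service from insertion nor
is penalized by it, and START_DEBIT's slice-based guess is no longer needed.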
From patchwork Tue Mar 28 09:26:28 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75992
Message-ID: <20230328110353.988000317@infradead.org>
User-Agent: quilt/0.66
Date: Tue, 28 Mar 2023 11:26:28 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de
Subject: [PATCH 06/17] sched/fair: Add lag based placement
References: <20230328092622.062917921@infradead.org>
With the introduction of avg_vruntime() it is possible to approximate lag (which was the entire purpose of introducing it). Use this to do lag based placement over sleep+wake.

Specifically, the FAIR_SLEEPERS logic places entities too far to the left and messes up the deadline aspect of EEVDF.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/sched.h   |    1 
 kernel/sched/core.c     |    1 
 kernel/sched/fair.c     |  129 ++++++++++++++++++++++++++++++++++--------------
 kernel/sched/features.h |    8 ++
 4 files changed, 104 insertions(+), 35 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -555,6 +555,7 @@ struct sched_entity {
 	u64				sum_exec_runtime;
 	u64				vruntime;
 	u64				prev_sum_exec_runtime;
+	s64				vlag;
 
 	u64				nr_migrations;
 
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4439,6 +4439,7 @@ static void __sched_fork(unsigned long c
 	p->se.prev_sum_exec_runtime	= 0;
 	p->se.nr_migrations		= 0;
 	p->se.vruntime			= 0;
+	p->se.vlag			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
 
 	set_latency_offset(p);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -689,6 +689,15 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 	return cfs_rq->min_vruntime + avg;
 }
 
+/*
+ * lag_i = S - s_i = w_i * (V - v_i)
+ */
+void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	SCHED_WARN_ON(!se->on_rq);
+	se->vlag = avg_vruntime(cfs_rq) - se->vruntime;
+}
+
 static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	u64 min_vruntime = cfs_rq->min_vruntime;
@@ -3417,6 +3426,8 @@ dequeue_load_avg(struct cfs_rq *cfs_rq, 
 static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 			    unsigned long weight)
 {
+	unsigned long old_weight = se->load.weight;
+
 	if (se->on_rq) {
 		/* commit outstanding execution time */
 		if (cfs_rq->curr == se)
@@ -3429,6 +3440,14 @@ static void reweight_entity(struct cfs_r
 
 	update_load_set(&se->load, weight);
 
+	if (!se->on_rq) {
+		/*
+		 * Because we keep se->vlag = V - v_i, while: lag_i = w_i*(V - v_i),
+		 * we need to scale se->vlag when w_i changes.
+		 */
+		se->vlag = div_s64(se->vlag * old_weight, weight);
+	}
+
 #ifdef CONFIG_SMP
 	do {
 		u32 divider = get_pelt_divider(&se->avg);
@@ -4778,49 +4797,86 @@ static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
 	u64 vruntime = avg_vruntime(cfs_rq);
+	s64 lag = 0;
 
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
+	/*
+	 * Due to how V is constructed as the weighted average of entities,
+	 * adding tasks with positive lag, or removing tasks with negative lag
+	 * will move 'time' backwards; this can screw around with the lag of
+	 * other tasks.
+	 *
+	 * EEVDF: placement strategy #1 / #2
+	 */
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) {
+		struct sched_entity *curr = cfs_rq->curr;
+		unsigned long load;
 
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
+		lag = se->vlag;
 
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * If we want to place a task and preserve lag, we have to
+		 * consider the effect of the new entity on the weighted
+		 * average and compensate for this, otherwise lag can quickly
+		 * evaporate:
+		 *
+		 * l_i = V - v_i <=> v_i = V - l_i
+		 *
+		 * V = v_avg = W*v_avg / W
+		 *
+		 * V' = (W*v_avg + w_i*v_i) / (W + w_i)
+		 *    = (W*v_avg + w_i(v_avg - l_i)) / (W + w_i)
+		 *    = v_avg - w_i*l_i/(W + w_i)
+		 *
+		 * l_i' = V' - v_i = v_avg - w_i*l_i/(W + w_i) - (v_avg - l_i)
+		 *      = l_i - w_i*l_i/(W + w_i)
+		 *
+		 * l_i = (W + w_i) * l_i' / W
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
+		load = cfs_rq->avg_load;
+		if (curr && curr->on_rq)
+			load += curr->load.weight;
+
+		lag *= load + se->load.weight;
+		if (WARN_ON_ONCE(!load))
+			load = 1;
+		lag = div_s64(lag, load);
 
-		vruntime -= thresh;
+		vruntime -= lag;
 	}
 
-	/*
-	 * Pull vruntime of the entity being placed to the base level of
-	 * cfs_rq, to prevent boosting it if placed backwards.
-	 * However, min_vruntime can advance much faster than real time, with
-	 * the extreme being when an entity with the minimal weight always runs
-	 * on the cfs_rq. If the waking entity slept for a long time, its
-	 * vruntime difference from min_vruntime may overflow s64 and their
-	 * comparison may get inversed, so ignore the entity's original
-	 * vruntime in that case.
-	 * The maximal vruntime speedup is given by the ratio of normal to
-	 * minimal weight: scale_load_down(NICE_0_LOAD) / MIN_SHARES.
-	 * When placing a migrated waking entity, its exec_start has been set
-	 * from a different rq. In order to take into account a possible
-	 * divergence between new and prev rq's clocks task because of irq and
-	 * stolen time, we take an additional margin.
-	 * So, cutting off on the sleep time of
-	 *	2^63 / scale_load_down(NICE_0_LOAD) ~ 104 days
-	 * should be safe.
-	 */
-	if (entity_is_long_sleeper(se))
-		se->vruntime = vruntime;
-	else
-		se->vruntime = max_vruntime(se->vruntime, vruntime);
+	if (sched_feat(FAIR_SLEEPERS)) {
+
+		/* sleeps up to a single latency don't count. */
+		if (!initial) {
+			unsigned long thresh;
+
+			if (se_is_idle(se))
+				thresh = sysctl_sched_min_granularity;
+			else
+				thresh = sysctl_sched_latency;
+
+			/*
+			 * Halve their sleep time's effect, to allow
+			 * for a gentler effect of sleepers:
+			 */
+			if (sched_feat(GENTLE_FAIR_SLEEPERS))
+				thresh >>= 1;
+
+			vruntime -= thresh;
+		}
+
+		/*
+		 * Pull vruntime of the entity being placed to the base level of
+		 * cfs_rq, to prevent boosting it if placed backwards. If the entity
+		 * slept for a long time, don't even try to compare its vruntime with
+		 * the base as it may be too far off and the comparison may get
+		 * inversed due to s64 overflow.
+		 */
+		if (!entity_is_long_sleeper(se))
+			vruntime = max_vruntime(se->vruntime, vruntime);
+	}
+
+	se->vruntime = vruntime;
 }
 
 static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
@@ -4991,6 +5047,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 
 	clear_buddies(cfs_rq, se);
 
+	if (flags & DEQUEUE_SLEEP)
+		update_entity_lag(cfs_rq, se);
+
 	if (se != cfs_rq->curr)
 		__dequeue_entity(cfs_rq, se);
 	se->on_rq = 0;
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -1,12 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+
 /*
  * Only give sleepers 50% of their service deficit. This allows
  * them to run sooner, but does not allow tons of sleepers to
  * rip the spread apart.
  */
+SCHED_FEAT(FAIR_SLEEPERS, false)
 SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
 
 /*
+ * Using the avg_vruntime, do the right thing and preserve lag across
+ * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
+ */
+SCHED_FEAT(PLACE_LAG, true)
+
+/*
  * Prefer to schedule the task we woke last (assuming it failed
  * wakeup-preemption), since its likely going to consume data we
  * touched, increases cache locality.
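To see what the compensation step in place_entity() does numerically, here is a minimal user-space sketch; it is not part of the patch, the names are illustrative, and for simplicity it assumes V is the plain weighted average of the queued vruntimes:

	/* User-space sketch of the lag compensation in place_entity(). */
	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int64_t V = 1000;	/* avg_vruntime of the runqueue */
		int64_t W = 3072;	/* \Sum w_i of already-queued entities */
		int64_t w = 1024;	/* weight of the entity being placed */
		int64_t vlag = 200;	/* lag recorded at dequeue: V - v_i */

		/* scale the lag: l_i = (W + w_i) * l_i' / W */
		int64_t lag = vlag * (W + w) / W;

		/* place the entity relative to the current average */
		int64_t v = V - lag;

		/* the average after adding the entity ... */
		int64_t V2 = (W * V + w * v) / (W + w);

		/* ... leaves the entity with (almost) its original lag */
		printf("placed v_i=%lld, new V=%lld, resulting lag=%lld\n",
		       (long long)v, (long long)V2, (long long)(V2 - v));
		return 0;
	}

With these numbers the entity is placed at v_i = 734 and, once the average shifts to 933, ends up with a lag of 199: the recorded vlag of 200 survives the enqueue up to integer truncation. Without the (W + w_i)/W scaling it would have kept only about 150.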
From patchwork Tue Mar 28 09:26:29 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 76000
Message-ID: <20230328110354.061987722@infradead.org>
Date: Tue, 28 Mar 2023 11:26:29 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Subject: [PATCH 07/17] rbtree: Add rb_add_augmented_cached() helper
References: <20230328092622.062917921@infradead.org>
Updating the augmented data while going down the tree during lookup would be faster, but alas the augment interface does not currently allow for that. So, while slightly sub-optimal, provide a generic helper to add a node to an augmented cached tree.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/rbtree_augmented.h |   26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

--- a/include/linux/rbtree_augmented.h
+++ b/include/linux/rbtree_augmented.h
@@ -60,6 +60,32 @@ rb_insert_augmented_cached(struct rb_nod
 	rb_insert_augmented(node, &root->rb_root, augment);
 }
 
+static __always_inline struct rb_node *
+rb_add_augmented_cached(struct rb_node *node, struct rb_root_cached *tree,
+			bool (*less)(struct rb_node *, const struct rb_node *),
+			const struct rb_augment_callbacks *augment)
+{
+	struct rb_node **link = &tree->rb_root.rb_node;
+	struct rb_node *parent = NULL;
+	bool leftmost = true;
+
+	while (*link) {
+		parent = *link;
+		if (less(node, parent)) {
+			link = &parent->rb_left;
+		} else {
+			link = &parent->rb_right;
+			leftmost = false;
+		}
+	}
+
+	rb_link_node(node, parent, link);
+	augment->propagate(parent, NULL); /* suboptimal */
+	rb_insert_augmented_cached(node, tree, leftmost, augment);
+
+	return leftmost ? node : NULL;
+}
+
 /*
  * Template for declaring augmented rbtree callbacks (generic case)
  *
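For reference, the helper is meant to be paired with an augment callback and a less() comparator; the sketch below condenses how the next patch's __enqueue_entity() ends up calling it (__entity_less and min_deadline_cb are introduced there, not by this patch):

	/* Sketch of intended usage, mirroring the next patch; not code added here. */
	static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		avg_vruntime_add(cfs_rq, se);
		/* seed the augmented value before the node enters the tree */
		se->min_deadline = se->deadline;
		rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
					__entity_less, &min_deadline_cb);
	}

Note the return value: rb_add_augmented_cached() hands back the node when it became the new leftmost and NULL otherwise, so a caller that keeps leftmost-derived state can update it without an extra tree walk.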
From patchwork Tue Mar 28 09:26:30 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75991
Message-ID: <20230328110354.141543852@infradead.org>
Date: Tue, 28 Mar 2023 11:26:30 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Subject: [PATCH 08/17] sched/fair: Implement an EEVDF like policy
References: <20230328092622.062917921@infradead.org>
CFS is currently a WFQ based scheduler with only a single knob: the weight. The addition of a second, latency oriented parameter makes something WF2Q or EEVDF based a much better fit.

Specifically, EEVDF does EDF like scheduling in the left half of the tree -- those entities that are owed service. Except, because this is a virtual time scheduler, the deadlines are in virtual time as well, which is what allows over-subscription.

EEVDF has two parameters:

 - weight, or time-slope; which is mapped to nice just as before

 - relative deadline; which is related to slice length and mapped to
   the new latency nice.

Basically, by setting a smaller slice, the deadline will be earlier and the task will be more eligible and run earlier.

Preemption (both tick and wakeup) is driven by testing against a fresh pick. Because the tree is now effectively an interval tree, and the selection is no longer 'leftmost', over-scheduling is less of a problem.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/linux/sched.h   |    4 
 kernel/sched/debug.c    |    6 
 kernel/sched/fair.c     |  324 +++++++++++++++++++++++++++++++++++++++++-------
 kernel/sched/features.h |    3 
 kernel/sched/sched.h    |    1 
 5 files changed, 293 insertions(+), 45 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -548,6 +548,9 @@ struct sched_entity {
 	/* For load-balancing: */
 	struct load_weight		load;
 	struct rb_node			run_node;
+	u64				deadline;
+	u64				min_deadline;
+
 	struct list_head		group_node;
 	unsigned int			on_rq;
@@ -556,6 +559,7 @@ struct sched_entity {
 	u64				vruntime;
 	u64				prev_sum_exec_runtime;
 	s64				vlag;
+	u64				slice;
 
 	u64				nr_migrations;
 
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -535,9 +535,13 @@ print_task(struct seq_file *m, struct rq
 	else
 		SEQ_printf(m, " %c", task_state_to_char(p));
 
-	SEQ_printf(m, " %15s %5d %9Ld.%06ld %9Ld %5d ",
+	SEQ_printf(m, "%15s %5d %9Ld.%06ld %c %9Ld.%06ld %9Ld.%06ld %9Ld.%06ld %9Ld %5d ",
 		p->comm, task_pid_nr(p),
 		SPLIT_NS(p->se.vruntime),
+		entity_eligible(cfs_rq_of(&p->se), &p->se) ? 'E' : 'N',
+		SPLIT_NS(p->se.deadline),
+		SPLIT_NS(p->se.slice),
+		SPLIT_NS(p->se.sum_exec_runtime),
 		(long long)(p->nvcsw + p->nivcsw),
 		p->prio);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -347,6 +348,16 @@ static u64 __calc_delta(u64 delta_exec, 
 	return mul_u64_u32_shr(delta_exec, fact, shift);
 }
 
+/*
+ * delta /= w
+ */
+static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
+{
+	if (unlikely(se->load.weight != NICE_0_LOAD))
+		delta = __calc_delta(delta, NICE_0_LOAD, &se->load);
+
+	return delta;
+}
 
 const struct sched_class fair_sched_class;
 
@@ -691,11 +702,62 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 
 /*
  * lag_i = S - s_i = w_i * (V - v_i)
+ *
+ * However, since V is approximated by the weighted average of all entities it
+ * is possible -- by addition/removal/reweight to the tree -- to move V around
+ * and end up with a larger lag than we started with.
+ *
+ * Limit this to double the slice length, with a minimum of TICK_NSEC, since
+ * that is the timing granularity.
+ *
+ * EEVDF gives the following limit for a steady state system:
+ *
+ *   -r_max < lag < max(r_max, q)
+ *
+ * XXX could add max_slice to the augmented data to track this.
 */
 void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	s64 lag, limit;
+
 	SCHED_WARN_ON(!se->on_rq);
-	se->vlag = avg_vruntime(cfs_rq) - se->vruntime;
+	lag = avg_vruntime(cfs_rq) - se->vruntime;
+
+	limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
+	se->vlag = clamp(lag, -limit, limit);
+}
+
+/*
+ * Entity is eligible once it received less service than it ought to have,
+ * i.e. lag >= 0.
+ *
+ * lag_i = S - s_i = w_i*(V - v_i)
+ *
+ * lag_i >= 0 -> V >= v_i
+ *
+ *     \Sum (v_i - v)*w_i
+ * V = ------------------ + v
+ *          \Sum w_i
+ *
+ * lag_i >= 0 -> \Sum (v_i - v)*w_i >= (v_i - v)*(\Sum w_i)
+ *
+ * Note: using 'avg_vruntime() > se->vruntime' is inaccurate due
+ * to the loss in precision caused by the division.
+ */
+int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	struct sched_entity *curr = cfs_rq->curr;
+	s64 avg = cfs_rq->avg_vruntime;
+	long load = cfs_rq->avg_load;
+
+	if (curr && curr->on_rq) {
+		unsigned long weight = scale_load_down(curr->load.weight);
+
+		avg += entity_key(cfs_rq, curr) * weight;
+		load += weight;
+	}
+
+	return avg >= entity_key(cfs_rq, se) * load;
 }
 
 static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime)
@@ -714,8 +776,8 @@ static u64 __update_min_vruntime(struct 
 
 static void update_min_vruntime(struct cfs_rq *cfs_rq)
 {
+	struct sched_entity *se = __pick_first_entity(cfs_rq);
 	struct sched_entity *curr = cfs_rq->curr;
-	struct rb_node *leftmost = rb_first_cached(&cfs_rq->tasks_timeline);
 
 	u64 vruntime = cfs_rq->min_vruntime;
 
@@ -726,9 +788,7 @@ static void update_min_vruntime(struct c
 			curr = NULL;
 	}
 
-	if (leftmost) { /* non-empty tree */
-		struct sched_entity *se = __node_2_se(leftmost);
-
+	if (se) {
 		if (!curr)
 			vruntime = se->vruntime;
 		else
@@ -745,18 +805,50 @@ static inline bool __entity_less(struct 
 	return entity_before(__node_2_se(a), __node_2_se(b));
 }
 
+#define deadline_gt(field, lse, rse) ({ (s64)((lse)->field - (rse)->field) > 0; })
+
+static inline void __update_min_deadline(struct sched_entity *se, struct rb_node *node)
+{
+	if (node) {
+		struct sched_entity *rse = __node_2_se(node);
+		if (deadline_gt(min_deadline, se, rse))
+			se->min_deadline = rse->min_deadline;
+	}
+}
+
+/*
+ * se->min_deadline = min(se->deadline, left->min_deadline, right->min_deadline)
+ */
+static inline bool min_deadline_update(struct sched_entity *se, bool exit)
+{
+	u64 old_min_deadline = se->min_deadline;
+	struct rb_node *node = &se->run_node;
+
+	se->min_deadline = se->deadline;
+	__update_min_deadline(se, node->rb_right);
+	__update_min_deadline(se, node->rb_left);
+
+	return se->min_deadline == old_min_deadline;
+}
+
+RB_DECLARE_CALLBACKS(static, min_deadline_cb, struct sched_entity,
+		     run_node, min_deadline, min_deadline_update);
+
 /*
  * Enqueue an entity into the rb-tree:
  */
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	avg_vruntime_add(cfs_rq, se);
-	rb_add_cached(&se->run_node, &cfs_rq->tasks_timeline, __entity_less);
+	se->min_deadline = se->deadline;
+	rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
+				__entity_less, &min_deadline_cb);
 }
 
 static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	rb_erase_cached(&se->run_node, &cfs_rq->tasks_timeline);
+	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
+				  &min_deadline_cb);
 	avg_vruntime_sub(cfs_rq, se);
 }
 
@@ -780,6 +872,97 @@ static struct sched_entity *__pick_next_
 	return __node_2_se(next);
 }
 
+static struct sched_entity *pick_cfs(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+{
+	struct sched_entity *left = __pick_first_entity(cfs_rq);
+
+	/*
+	 * If curr is set we have to see if it's left of the leftmost entity
+	 * still in the tree, provided there was anything in the tree at all.
+	 */
+	if (!left || (curr && entity_before(curr, left)))
+		left = curr;
+
+	return left;
+}
+
+/*
+ * Earliest Eligible Virtual Deadline First
+ *
+ * In order to provide latency guarantees for different request sizes
+ * EEVDF selects the best runnable task from two criteria:
+ *
+ *  1) the task must be eligible (must be owed service)
+ *
+ *  2) from those tasks that meet 1), we select the one
+ *     with the earliest virtual deadline.
+ *
+ * We can do this in O(log n) time due to an augmented RB-tree. The
+ * tree keeps the entries sorted on service, but also functions as a
+ * heap based on the deadline by keeping:
+ *
+ *  se->min_deadline = min(se->deadline, se->{left,right}->min_deadline)
+ *
+ * Which allows an EDF like search on (sub)trees.
+ */
+static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *node = cfs_rq->tasks_timeline.rb_root.rb_node;
+	struct sched_entity *curr = cfs_rq->curr;
+	struct sched_entity *best = NULL;
+
+	if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr)))
+		curr = NULL;
+
+	while (node) {
+		struct sched_entity *se = __node_2_se(node);
+
+		/*
+		 * If this entity is not eligible, try the left subtree.
+		 */
+		if (!entity_eligible(cfs_rq, se)) {
+			node = node->rb_left;
+			continue;
+		}
+
+		/*
+		 * If this entity has an earlier deadline than the previous
+		 * best, take this one. If it also has the earliest deadline
+		 * of its subtree, we're done.
+		 */
+		if (!best || deadline_gt(deadline, best, se)) {
+			best = se;
+			if (best->deadline == best->min_deadline)
+				break;
+		}
+
+		/*
+		 * If the earliest deadline in this subtree is in the fully
+		 * eligible left half of our space, go there.
+		 */
+		if (node->rb_left &&
+		    __node_2_se(node->rb_left)->min_deadline == se->min_deadline) {
+			node = node->rb_left;
+			continue;
+		}
+
+		node = node->rb_right;
+	}
+
+	if (!best || (curr && deadline_gt(deadline, best, curr)))
+		best = curr;
+
+	if (unlikely(!best)) {
+		struct sched_entity *left = __pick_first_entity(cfs_rq);
+		if (left) {
+			pr_err("EEVDF scheduling fail, picking leftmost\n");
+			return left;
+		}
+	}
+
+	return best;
+}
+
 #ifdef CONFIG_SCHED_DEBUG
 struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 {
@@ -822,17 +1005,6 @@ long calc_latency_offset(int prio)
 }
 
 /*
- * delta /= w
- */
-static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
-{
-	if (unlikely(se->load.weight != NICE_0_LOAD))
-		delta = __calc_delta(delta, NICE_0_LOAD, &se->load);
-
-	return delta;
-}
-
-/*
  * The idea is to set a period in which each task runs once.
  *
  * When there are too many tasks (sched_nr_latency) we have to stretch
@@ -897,6 +1069,38 @@ static u64 sched_slice(struct cfs_rq *cf
 	return slice;
 }
 
+/*
+ * XXX: strictly: vd_i += N*r_i/w_i such that: vd_i > ve_i
+ * this is probably good enough.
+ */
+static void update_deadline(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	if ((s64)(se->vruntime - se->deadline) < 0)
+		return;
+
+	if (sched_feat(EEVDF)) {
+		/*
+		 * For EEVDF the virtual time slope is determined by w_i (iow.
+		 * nice) while the request time r_i is determined by
+		 * latency-nice.
+		 */
+		se->slice = se->latency_offset;
+	} else {
+		/*
+		 * When many tasks blow up the sched_period; it is possible
+		 * that sched_slice() reports unusually large results (when
+		 * many tasks are very light for example). Therefore impose a
+		 * maximum.
+		 */
+		se->slice = min_t(u64, sched_slice(cfs_rq, se), sysctl_sched_latency);
+	}
+
+	/*
+	 * EEVDF: vd_i = ve_i + r_i / w_i
+	 */
+	se->deadline = se->vruntime + calc_delta_fair(se->slice, se);
+}
+
 #include "pelt.h"
 #ifdef CONFIG_SMP
@@ -1029,6 +1233,7 @@ static void update_curr(struct cfs_rq *c
 	schedstat_add(cfs_rq->exec_clock, delta_exec);
 
 	curr->vruntime += calc_delta_fair(delta_exec, curr);
+	update_deadline(cfs_rq, curr);
 	update_min_vruntime(cfs_rq);
 
 	if (entity_is_task(curr)) {
@@ -4796,6 +5001,7 @@ static inline bool entity_is_long_sleepe
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
+	u64 vslice = calc_delta_fair(se->slice, se);
 	u64 vruntime = avg_vruntime(cfs_rq);
 	s64 lag = 0;
 
@@ -4834,9 +5040,9 @@ place_entity(struct cfs_rq *cfs_rq, stru
 		 */
 		load = cfs_rq->avg_load;
 		if (curr && curr->on_rq)
-			load += curr->load.weight;
+			load += scale_load_down(curr->load.weight);
 
-		lag *= load + se->load.weight;
+		lag *= load + scale_load_down(se->load.weight);
 		if (WARN_ON_ONCE(!load))
 			load = 1;
 		lag = div_s64(lag, load);
@@ -4877,6 +5083,19 @@ place_entity(struct cfs_rq *cfs_rq, stru
 	}
 
 	se->vruntime = vruntime;
+
+	/*
+	 * When joining the competition; the existing tasks will be,
+	 * on average, halfway through their slice, as such start tasks
+	 * off with half a slice to ease into the competition.
+	 */
+	if (sched_feat(PLACE_DEADLINE_INITIAL) && initial)
+		vslice /= 2;
+
+	/*
+	 * EEVDF: vd_i = ve_i + r_i/w_i
+	 */
+	se->deadline = se->vruntime + vslice;
 }
 
 static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
@@ -5088,19 +5307,20 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 static void
 check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
-	unsigned long ideal_runtime, delta_exec;
+	unsigned long delta_exec;
 	struct sched_entity *se;
 	s64 delta;
 
-	/*
-	 * When many tasks blow up the sched_period; it is possible that
-	 * sched_slice() reports unusually large results (when many tasks are
-	 * very light for example). Therefore impose a maximum.
-	 */
-	ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency);
+	if (sched_feat(EEVDF)) {
+		if (pick_eevdf(cfs_rq) != curr)
+			goto preempt;
+
+		return;
+	}
 
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
-	if (delta_exec > ideal_runtime) {
+	if (delta_exec > curr->slice) {
+preempt:
 		resched_curr(rq_of(cfs_rq));
 		/*
 		 * The current task ran long enough, ensure it doesn't get
@@ -5124,7 +5344,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq
 	if (delta < 0)
 		return;
 
-	if (delta > ideal_runtime)
+	if (delta > curr->slice)
 		resched_curr(rq_of(cfs_rq));
 }
 
@@ -5179,17 +5399,20 @@ wakeup_preempt_entity(struct sched_entit
 static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
-	struct sched_entity *left = __pick_first_entity(cfs_rq);
-	struct sched_entity *se;
+	struct sched_entity *left, *se;
 
-	/*
-	 * If curr is set we have to see if its left of the leftmost entity
-	 * still in the tree, provided there was anything in the tree at all.
-	 */
-	if (!left || (curr && entity_before(curr, left)))
-		left = curr;
+	if (sched_feat(EEVDF)) {
+		/*
+		 * Enabling NEXT_BUDDY will affect latency but not fairness.
+		 */
+		if (sched_feat(NEXT_BUDDY) &&
+		    cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next))
+			return cfs_rq->next;
+
+		return pick_eevdf(cfs_rq);
+	}
 
-	se = left; /* ideally we run the leftmost entity */
+	se = left = pick_cfs(cfs_rq, curr);
 
 	/*
 	 * Avoid running the skip buddy, if running something else can
@@ -6284,13 +6507,12 @@ static inline void unthrottle_offline_cf
 static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 	SCHED_WARN_ON(task_rq(p) != rq);
 
 	if (rq->cfs.h_nr_running > 1) {
-		u64 slice = sched_slice(cfs_rq, se);
 		u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+		u64 slice = se->slice;
 		s64 delta = slice - ran;
 
 		if (delta < 0) {
@@ -8010,7 +8232,19 @@ static void check_preempt_wakeup(struct 
 	if (cse_is_idle != pse_is_idle)
 		return;
 
-	update_curr(cfs_rq_of(se));
+	cfs_rq = cfs_rq_of(se);
+	update_curr(cfs_rq);
+
+	if (sched_feat(EEVDF)) {
+		/*
+		 * XXX pick_eevdf(cfs_rq) != se ?
+		 */
+		if (pick_eevdf(cfs_rq) == pse)
+			goto preempt;
+
+		return;
+	}
+
 	if (wakeup_preempt_entity(se, pse) == 1) {
 		/*
 		 * Bias pick_next to pick the sched entity that is
@@ -8256,7 +8490,7 @@ static void yield_task_fair(struct rq *r
 
 	clear_buddies(cfs_rq, se);
 
-	if (curr->policy != SCHED_BATCH) {
+	if (sched_feat(EEVDF) || curr->policy != SCHED_BATCH) {
 		update_rq_clock(rq);
 		/*
 		 * Update run-time statistics of the 'current'.
@@ -8269,6 +8503,8 @@ static void yield_task_fair(struct rq *r
 		 */
 		rq_clock_skip_update(rq);
 	}
+	if (sched_feat(EEVDF))
+		se->deadline += calc_delta_fair(se->slice, se);
 
 	set_skip_buddy(se);
 }
@@ -12012,8 +12248,8 @@ static void rq_offline_fair(struct rq *r
 static inline bool
 __entity_slice_used(struct sched_entity *se, int min_nr_tasks)
 {
-	u64 slice = sched_slice(cfs_rq_of(se), se);
 	u64 rtime = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+	u64 slice = se->slice;
 
 	return (rtime * min_nr_tasks > slice);
 }
@@ -12728,7 +12964,7 @@ static unsigned int get_rr_interval_fair
 	 * idle runqueue:
 	 */
 	if (rq->cfs.load.weight)
-		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se));
+		rr_interval = NS_TO_JIFFIES(se->slice);
 
 	return rr_interval;
 }
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -13,6 +13,7 @@ SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
  * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
  */
 SCHED_FEAT(PLACE_LAG, true)
+SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 
 /*
  * Prefer to schedule the task we woke last (assuming it failed
@@ -103,3 +104,5 @@ SCHED_FEAT(LATENCY_WARN, false)
 
 SCHED_FEAT(ALT_PERIOD, true)
 SCHED_FEAT(BASE_SLICE, true)
+
+SCHED_FEAT(EEVDF, true)
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3316,5 +3316,6 @@ static inline void switch_mm_cid(struct 
 #endif
 
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
 
 #endif /* _KERNEL_SCHED_SCHED_H */
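The division-free eligibility test above is the subtle part: rather than computing V and comparing it against v_i, both sides are multiplied by \Sum w_i so no precision is lost to the division. A stand-alone user-space sketch with illustrative names (keys are vruntime offsets against min_vruntime, as with entity_key()):

	/* Sketch of the comparison entity_eligible() performs; not kernel code. */
	#include <stdio.h>
	#include <stdint.h>

	struct ent { int64_t key; long weight; };

	static int eligible(const struct ent *q, int n, const struct ent *e)
	{
		int64_t avg = 0;	/* plays the role of cfs_rq->avg_vruntime */
		long load = 0;		/* plays the role of cfs_rq->avg_load */
		int i;

		for (i = 0; i < n; i++) {
			avg += q[i].key * q[i].weight;
			load += q[i].weight;
		}
		/* lag_i >= 0  <=>  V >= v_i, with both sides scaled by load */
		return avg >= e->key * load;
	}

	int main(void)
	{
		struct ent q[] = { { 0, 1024 }, { 300, 1024 }, { 600, 1024 } };
		struct ent late = { 600, 1024 };	/* above V=300: not eligible */
		struct ent owed = { 100, 1024 };	/* below V=300: eligible */

		printf("late: %d, owed: %d\n",
		       eligible(q, 3, &late), eligible(q, 3, &owed));
		return 0;
	}

With the queue above, V is 300; the entity at 600 has received more than its share and is skipped by pick_eevdf()'s left-subtree descent, while the entity at 100 is owed service and competes on its virtual deadline.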
From patchwork Tue Mar 28 09:26:31 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75993
Message-ID: <20230328110354.211436917@infradead.org>
Date: Tue, 28 Mar 2023 11:26:31 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Subject: [PATCH 09/17] sched: Commit to lag based placement
References: <20230328092622.062917921@infradead.org>
Removes the FAIR_SLEEPERS code in favour of the new LAG based placement.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c     |   59 ------------------------------------------
 kernel/sched/features.h |    8 ------
 2 files changed, 1 insertion(+), 66 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4970,29 +4970,6 @@ static void check_spread(struct cfs_rq *
 #endif
 }
 
-static inline bool entity_is_long_sleeper(struct sched_entity *se)
-{
-	struct cfs_rq *cfs_rq;
-	u64 sleep_time;
-
-	if (se->exec_start == 0)
-		return false;
-
-	cfs_rq = cfs_rq_of(se);
-
-	sleep_time = rq_clock_task(rq_of(cfs_rq));
-
-	/* Happen while migrating because of clock task divergence */
-	if (sleep_time <= se->exec_start)
-		return false;
-
-	sleep_time -= se->exec_start;
-	if (sleep_time > ((1ULL << 63) / scale_load_down(NICE_0_LOAD)))
-		return true;
-
-	return false;
-}
-
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
@@ -5041,43 +5018,9 @@ place_entity(struct cfs_rq *cfs_rq, stru
 		if (WARN_ON_ONCE(!load))
 			load = 1;
 		lag = div_s64(lag, load);
-
-		vruntime -= lag;
-	}
-
-	if (sched_feat(FAIR_SLEEPERS)) {
-
-		/* sleeps up to a single latency don't count. */
-		if (!initial) {
-			unsigned long thresh;
-
-			if (se_is_idle(se))
-				thresh = sysctl_sched_min_granularity;
-			else
-				thresh = sysctl_sched_latency;
-
-			/*
-			 * Halve their sleep time's effect, to allow
-			 * for a gentler effect of sleepers:
-			 */
-			if (sched_feat(GENTLE_FAIR_SLEEPERS))
-				thresh >>= 1;
-
-			vruntime -= thresh;
-		}
-
-		/*
-		 * Pull vruntime of the entity being placed to the base level of
-		 * cfs_rq, to prevent boosting it if placed backwards. If the entity
-		 * slept for a long time, don't even try to compare its vruntime with
-		 * the base as it may be too far off and the comparison may get
-		 * inversed due to s64 overflow.
-		 */
-		if (!entity_is_long_sleeper(se))
-			vruntime = max_vruntime(se->vruntime, vruntime);
 	}
 
-	se->vruntime = vruntime;
+	se->vruntime = vruntime - lag;
 
 	/*
 	 * When joining the competition; the existing tasks will be,
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -1,14 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
 /*
- * Only give sleepers 50% of their service deficit. This allows
- * them to run sooner, but does not allow tons of sleepers to
- * rip the spread apart.
- */
-SCHED_FEAT(FAIR_SLEEPERS, false)
-SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
-
-/*
  * Using the avg_vruntime, do the right thing and preserve lag across
  * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
  */
From patchwork Tue Mar 28 09:26:32 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75989
Message-ID: <20230328110354.279334987@infradead.org>
Date: Tue, 28 Mar 2023 11:26:32 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Subject: [PATCH 10/17] sched/smp: Use lag to simplify cross-runqueue placement
References: <20230328092622.062917921@infradead.org>
Using lag is both more correct and simpler when moving between runqueues.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c |  145 ++++++----------------------------------------------
 1 file changed, 19 insertions(+), 126 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4985,7 +4985,7 @@ place_entity(struct cfs_rq *cfs_rq, stru
 	 *
 	 * EEVDF: placement strategy #1 / #2
 	 */
-	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) {
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running) {
 		struct sched_entity *curr = cfs_rq->curr;
 		unsigned long load;
 
@@ -5040,61 +5040,21 @@ static void check_enqueue_throttle(struc
 
 static inline bool cfs_bandwidth_used(void);
 
-/*
- * MIGRATION
- *
- *	dequeue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way the vruntime transition between RQs is done when both
- * min_vruntime are up-to-date.
- *
- * WAKEUP (remote)
- *
- *	->migrate_task_rq_fair() (p->state == TASK_WAKING)
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way we don't have the most up-to-date min_vruntime on the originating
- * CPU and an up-to-date min_vruntime on the destination CPU.
- */
-
 static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
-	bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATED);
 	bool curr = cfs_rq->curr == se;
 
 	/*
 	 * If we're the current task, we must renormalise before calling
 	 * update_curr().
 	 */
-	if (renorm && curr)
-		se->vruntime += cfs_rq->min_vruntime;
+	if (curr)
+		place_entity(cfs_rq, se, 0);
 
 	update_curr(cfs_rq);
 
 	/*
-	 * Otherwise, renormalise after, such that we're placed at the current
-	 * moment in time, instead of some random moment in the past. Being
-	 * placed in the past could significantly boost this task to the
-	 * fairness detriment of existing tasks.
-	 */
-	if (renorm && !curr)
-		se->vruntime += cfs_rq->min_vruntime;
-
-	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
@@ -5105,11 +5065,22 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	 */
 	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
 	se_update_runnable(se);
+	/*
+	 * XXX update_load_avg() above will have attached us to the pelt sum;
+	 * but update_cfs_group() here will re-adjust the weight and have to
+	 * undo/redo all that. Seems wasteful.
+	 */
 	update_cfs_group(se);
-	account_entity_enqueue(cfs_rq, se);
 
-	if (flags & ENQUEUE_WAKEUP)
+	/*
+	 * XXX now that the entity has been re-weighted, and its lag adjusted,
+	 * we can place the entity.
+	 */
+	if (!curr)
+		place_entity(cfs_rq, se, 0);
+
+	account_entity_enqueue(cfs_rq, se);
+
 	/* Entity has migrated, no longer consider this task hot */
 	if (flags & ENQUEUE_MIGRATED)
 		se->exec_start = 0;
@@ -5204,23 +5175,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 
 	clear_buddies(cfs_rq, se);
 
-	if (flags & DEQUEUE_SLEEP)
-		update_entity_lag(cfs_rq, se);
-
+	update_entity_lag(cfs_rq, se);
 	if (se != cfs_rq->curr)
 		__dequeue_entity(cfs_rq, se);
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
-	/*
-	 * Normalize after update_curr(); which will also have moved
-	 * min_vruntime if @se is the one holding it back. But before doing
-	 * update_min_vruntime() again, which will discount @se's position and
-	 * can move min_vruntime forward still more.
-	 */
-	if (!(flags & DEQUEUE_SLEEP))
-		se->vruntime -= cfs_rq->min_vruntime;
-
 	/* return excess runtime on last dequeue */
 	return_cfs_rq_runtime(cfs_rq);
 
@@ -7975,18 +7935,6 @@ static void migrate_task_rq_fair(struct 
 {
 	struct sched_entity *se = &p->se;
 
-	/*
-	 * As blocked tasks retain absolute vruntime the migration needs to
-	 * deal with this by subtracting the old and adding the new
-	 * min_vruntime -- the latter is done by enqueue_entity() when placing
-	 * the task on the new runqueue.
-	 */
-	if (READ_ONCE(p->__state) == TASK_WAKING) {
-		struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
-	}
-
 	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);
 
@@ -12331,8 +12279,8 @@ static void task_tick_fair(struct rq *rq
  */
 static void task_fork_fair(struct task_struct *p)
 {
-	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se, *curr;
+	struct cfs_rq *cfs_rq;
 	struct rq *rq = this_rq();
 	struct rq_flags rf;
 
@@ -12341,22 +12289,9 @@ static void task_fork_fair(struct task_s
 	cfs_rq = task_cfs_rq(current);
 	curr = cfs_rq->curr;
-	if (curr) {
+	if (curr)
 		update_curr(cfs_rq);
-		se->vruntime = curr->vruntime;
-	}
 	place_entity(cfs_rq, se, 1);
-
-	if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
-		/*
-		 * Upon rescheduling, sched_class::put_prev_task() will place
-		 * 'current' within the tree based on its new key value.
-		 */
-		swap(curr->vruntime, se->vruntime);
-		resched_curr(rq);
-	}
-
-	se->vruntime -= cfs_rq->min_vruntime;
 
 	rq_unlock(rq, &rf);
 }
@@ -12385,34 +12320,6 @@ prio_changed_fair(struct rq *rq, struct 
 	check_preempt_curr(rq, p, 0);
 }
 
-static inline bool vruntime_normalized(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	/*
-	 * In both the TASK_ON_RQ_QUEUED and TASK_ON_RQ_MIGRATING cases,
-	 * the dequeue_entity(.flags=0) will already have normalized the
-	 * vruntime.
-	 */
-	if (p->on_rq)
-		return true;
-
-	/*
-	 * When !on_rq, vruntime of the task has usually NOT been normalized.
-	 * But there are some cases where it has already been normalized:
-	 *
-	 * - A forked child which is waiting for being woken up by
-	 *   wake_up_new_task().
-	 * - A task which has been woken up by try_to_wake_up() and
-	 *   waiting for actually being woken up by sched_ttwu_pending().
- */
-	if (!se->sum_exec_runtime ||
-	    (READ_ONCE(p->__state) == TASK_WAKING && p->sched_remote_wakeup))
-		return true;
-
-	return false;
-}
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /*
  * Propagate the changes of the sched_entity across the tg tree to make it
@@ -12483,16 +12390,6 @@ static void attach_entity_cfs_rq(struct 
 static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	if (!vruntime_normalized(p)) {
-		/*
-		 * Fix up our vruntime so that the current sleep doesn't
-		 * cause 'unlimited' sleep bonus.
-		 */
-		place_entity(cfs_rq, se, 0);
-		se->vruntime -= cfs_rq->min_vruntime;
-	}
 
 	detach_entity_cfs_rq(se);
 }
@@ -12500,12 +12397,8 @@ static void detach_task_cfs_rq(struct ta
 static void attach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 	attach_entity_cfs_rq(se);
-
-	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
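A stand-alone sketch of why this simplification works (illustrative names; the (W + w_i)/W scaling from the earlier placement patch is omitted for brevity): vruntime is runqueue-local, so the old code had to renormalize with min_vruntime on every hop, while lag is already relative to its runqueue's average and can be carried across unchanged.

	/* User-space sketch of lag surviving a cross-runqueue migration. */
	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int64_t V_src = 1000000;	/* avg_vruntime on the source rq */
		int64_t V_dst = 5000;		/* avg_vruntime on the destination rq */
		int64_t v_task = 999800;	/* task vruntime, in source-rq time */

		/* dequeue: update_entity_lag() records a relative quantity */
		int64_t vlag = V_src - v_task;	/* = 200, no rq-local base in it */

		/* enqueue: place_entity() re-bases it on the destination clock */
		int64_t v_new = V_dst - vlag;

		printf("carried lag=%lld, placed at %lld on dst rq\n",
		       (long long)vlag, (long long)v_new);
		return 0;
	}

The absolute vruntimes on the two runqueues differ wildly, yet the task keeps its 200 units of owed service; that is exactly what lets the patch delete vruntime_normalized() and the min_vruntime add/subtract dance.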
From patchwork Tue Mar 28 09:26:33 2023
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 28 Mar 2023 11:26:33 +0200
Subject: [PATCH 11/17] sched: Commit to EEVDF
Message-ID: <20230328110354.356193222@infradead.org>
References: <20230328092622.062917921@infradead.org>
Remove all the dead code...

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/debug.c    |    6 
 kernel/sched/fair.c     |  440 +++----------------------------------------------
 kernel/sched/features.h |   12 -
 kernel/sched/sched.h    |    5 
 4 files changed, 31 insertions(+), 432 deletions(-)

--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -308,10 +308,7 @@ static __init int sched_init_debug(void)
 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
 #endif

-	debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
 	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
-	debugfs_create_u32("idle_min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_idle_min_granularity);
-	debugfs_create_u32("wakeup_granularity_ns", 0644, debugfs_sched, &sysctl_sched_wakeup_granularity);

 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
@@ -819,10 +816,7 @@ static void sched_debug_header(struct se
 	SEQ_printf(m, "  .%-40s: %Ld\n", #x, (long long)(x))
 #define PN(x) \
 	SEQ_printf(m, "  .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
-	PN(sysctl_sched_latency);
 	PN(sysctl_sched_min_granularity);
-	PN(sysctl_sched_idle_min_granularity);
-	PN(sysctl_sched_wakeup_granularity);
 	P(sysctl_sched_child_runs_first);
 	P(sysctl_sched_features);
 #undef PN
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -58,22 +58,6 @@
 #include "autogroup.h"

 /*
- * Targeted preemption latency for CPU-bound tasks:
- *
- * NOTE: this latency value is not the same as the concept of
- * 'timeslice length' - timeslices in CFS are of variable length
- * and have no persistent notion like in traditional, time-slice
- * based scheduling concepts.
- *
- * (to see the precise effective timeslice length of your workload,
- *  run vmstat and monitor the context-switches (cs) field)
- *
- * (default: 6ms * (1 + ilog(ncpus)), units: nanoseconds)
- */
-unsigned int sysctl_sched_latency			= 6000000ULL;
-static unsigned int normalized_sysctl_sched_latency	= 6000000ULL;
-
-/*
  * The initial- and re-scaling of tunables is configurable
  *
  * Options are:
@@ -95,36 +79,11 @@ unsigned int sysctl_sched_min_granularit
 static unsigned int normalized_sysctl_sched_min_granularity	= 750000ULL;

 /*
- * Minimal preemption granularity for CPU-bound SCHED_IDLE tasks.
- * Applies only when SCHED_IDLE tasks compete with normal tasks.
- *
- * (default: 0.75 msec)
- */
-unsigned int sysctl_sched_idle_min_granularity			= 750000ULL;
-
-/*
- * This value is kept at sysctl_sched_latency/sysctl_sched_min_granularity
- */
-static unsigned int sched_nr_latency = 8;
-
-/*
  * After fork, child runs first. If set to 0 (default) then
  * parent will (try to) run first.
  */
 unsigned int sysctl_sched_child_runs_first __read_mostly;

-/*
- * SCHED_OTHER wake-up granularity.
- *
- * This option delays the preemption effects of decoupled workloads
- * and reduces their over-scheduling. Synchronous workloads will still
- * have immediate wakeup/sleep latencies.
- *
- * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds)
- */
-unsigned int sysctl_sched_wakeup_granularity			= 1000000UL;
-static unsigned int normalized_sysctl_sched_wakeup_granularity	= 1000000UL;
-
 const_debug unsigned int sysctl_sched_migration_cost	= 500000UL;

 int sched_thermal_decay_shift;
@@ -279,8 +238,6 @@ static void update_sysctl(void)
 #define SET_SYSCTL(name) \
 	(sysctl_##name = (factor) * normalized_sysctl_##name)
 	SET_SYSCTL(sched_min_granularity);
-	SET_SYSCTL(sched_latency);
-	SET_SYSCTL(sched_wakeup_granularity);
 #undef SET_SYSCTL
 }
@@ -853,30 +810,6 @@ struct sched_entity *__pick_first_entity
 	return __node_2_se(left);
 }

-static struct sched_entity *__pick_next_entity(struct sched_entity *se)
-{
-	struct rb_node *next = rb_next(&se->run_node);
-
-	if (!next)
-		return NULL;
-
-	return __node_2_se(next);
-}
-
-static struct sched_entity *pick_cfs(struct cfs_rq *cfs_rq, struct sched_entity *curr)
-{
-	struct sched_entity *left = __pick_first_entity(cfs_rq);
-
-	/*
-	 * If curr is set we have to see if its left of the leftmost entity
-	 * still in the tree, provided there was anything in the tree at all.
-	 */
-	if (!left || (curr && entity_before(curr, left)))
-		left = curr;
-
-	return left;
-}
-
 /*
  * Earliest Eligible Virtual Deadline First
  *
@@ -977,14 +910,9 @@ int sched_update_scaling(void)
 {
 	unsigned int factor = get_update_sysctl_factor();

-	sched_nr_latency = DIV_ROUND_UP(sysctl_sched_latency,
-					sysctl_sched_min_granularity);
-
 #define WRT_SYSCTL(name) \
 	(normalized_sysctl_##name = sysctl_##name / (factor))
 	WRT_SYSCTL(sched_min_granularity);
-	WRT_SYSCTL(sched_latency);
-	WRT_SYSCTL(sched_wakeup_granularity);
 #undef WRT_SYSCTL

 	return 0;
@@ -1000,71 +928,6 @@ long calc_latency_offset(int prio)
 }

 /*
- * The idea is to set a period in which each task runs once.
- *
- * When there are too many tasks (sched_nr_latency) we have to stretch
- * this period because otherwise the slices get too small.
- *
- * p = (nr <= nl) ? l : l*nr/nl
- */
-static u64 __sched_period(unsigned long nr_running)
-{
-	if (unlikely(nr_running > sched_nr_latency))
-		return nr_running * sysctl_sched_min_granularity;
-	else
-		return sysctl_sched_latency;
-}
-
-static bool sched_idle_cfs_rq(struct cfs_rq *cfs_rq);
-
-/*
- * We calculate the wall-time slice from the period by taking a part
- * proportional to the weight.
- *
- * s = p*P[w/rw]
- */
-static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
-{
-	unsigned int nr_running = cfs_rq->nr_running;
-	struct sched_entity *init_se = se;
-	unsigned int min_gran;
-	u64 slice;
-
-	if (sched_feat(ALT_PERIOD))
-		nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
-
-	slice = __sched_period(nr_running + !se->on_rq);
-
-	for_each_sched_entity(se) {
-		struct load_weight *load;
-		struct load_weight lw;
-		struct cfs_rq *qcfs_rq;
-
-		qcfs_rq = cfs_rq_of(se);
-		load = &qcfs_rq->load;
-
-		if (unlikely(!se->on_rq)) {
-			lw = qcfs_rq->load;
-
-			update_load_add(&lw, se->load.weight);
-			load = &lw;
-		}
-		slice = __calc_delta(slice, se->load.weight, load);
-	}
-
-	if (sched_feat(BASE_SLICE)) {
-		if (se_is_idle(init_se) && !sched_idle_cfs_rq(cfs_rq))
-			min_gran = sysctl_sched_idle_min_granularity;
-		else
-			min_gran = sysctl_sched_min_granularity;
-
-		slice = max_t(u64, slice, min_gran);
-	}
-
-	return slice;
-}
-
-/*
  * XXX: strictly: vd_i += N*r_i/w_i such that: vd_i > ve_i
  * this is probably good enough.
  */
@@ -1073,22 +936,12 @@ static void update_deadline(struct cfs_r
 	if ((s64)(se->vruntime - se->deadline) < 0)
 		return;

-	if (sched_feat(EEVDF)) {
-		/*
-		 * For EEVDF the virtual time slope is determined by w_i (iow.
-		 * nice) while the request time r_i is determined by
-		 * latency-nice.
-		 */
-		se->slice = se->latency_offset;
-	} else {
-		/*
-		 * When many tasks blow up the sched_period; it is possible
-		 * that sched_slice() reports unusually large results (when
-		 * many tasks are very light for example). Therefore impose a
-		 * maximum.
-		 */
-		se->slice = min_t(u64, sched_slice(cfs_rq, se), sysctl_sched_latency);
-	}
+	/*
+	 * For EEVDF the virtual time slope is determined by w_i (iow.
+	 * nice) while the request time r_i is determined by
+	 * latency-nice.
+	 */
+	se->slice = se->latency_offset;

 	/*
 	 * EEVDF: vd_i = ve_i + r_i / w_i
@@ -4957,19 +4810,6 @@ static inline void update_misfit_status(

 #endif /* CONFIG_SMP */

-static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
-{
-#ifdef CONFIG_SCHED_DEBUG
-	s64 d = se->vruntime - cfs_rq->min_vruntime;
-
-	if (d < 0)
-		d = -d;
-
-	if (d > 3*sysctl_sched_latency)
-		schedstat_inc(cfs_rq->nr_spread_over);
-#endif
-}
-
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
@@ -5087,7 +4927,6 @@ enqueue_entity(struct cfs_rq *cfs_rq, st

 	check_schedstat_required();
 	update_stats_enqueue_fair(cfs_rq, se, flags);
-	check_spread(cfs_rq, se);
 	if (!curr)
 		__enqueue_entity(cfs_rq, se);
 	se->on_rq = 1;
@@ -5099,17 +4938,6 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	}
 }

-static void __clear_buddies_last(struct sched_entity *se)
-{
-	for_each_sched_entity(se) {
-		struct cfs_rq *cfs_rq = cfs_rq_of(se);
-		if (cfs_rq->last != se)
-			break;
-
-		cfs_rq->last = NULL;
-	}
-}
-
 static void __clear_buddies_next(struct sched_entity *se)
 {
 	for_each_sched_entity(se) {
@@ -5121,27 +4949,10 @@ static void __clear_buddies_next(struct
 	}
 }

-static void __clear_buddies_skip(struct sched_entity *se)
-{
-	for_each_sched_entity(se) {
-		struct cfs_rq *cfs_rq = cfs_rq_of(se);
-		if (cfs_rq->skip != se)
-			break;
-
-		cfs_rq->skip = NULL;
-	}
-}
-
 static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (cfs_rq->last == se)
-		__clear_buddies_last(se);
-
 	if (cfs_rq->next == se)
 		__clear_buddies_next(se);
-
-	if (cfs_rq->skip == se)
-		__clear_buddies_skip(se);
 }

 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
@@ -5205,45 +5016,14 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 static void
 check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
-	unsigned long delta_exec;
-	struct sched_entity *se;
-	s64 delta;
-
-	if (sched_feat(EEVDF)) {
-		if (pick_eevdf(cfs_rq) != curr)
-			goto preempt;
-
-		return;
-	}
-
-	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
-	if (delta_exec > curr->slice) {
-preempt:
+	if (pick_eevdf(cfs_rq) != curr) {
 		resched_curr(rq_of(cfs_rq));
 		/*
 		 * The current task ran long enough, ensure it doesn't get
 		 * re-elected due to buddy favours.
 		 */
 		clear_buddies(cfs_rq, curr);
-		return;
 	}
-
-	/*
-	 * Ensure that a task that missed wakeup preemption by a
-	 * narrow margin doesn't have to wait for a full slice.
-	 * This also mitigates buddy induced latencies under load.
-	 */
-	if (delta_exec < sysctl_sched_min_granularity)
-		return;
-
-	se = __pick_first_entity(cfs_rq);
-	delta = curr->vruntime - se->vruntime;
-
-	if (delta < 0)
-		return;
-
-	if (delta > curr->slice)
-		resched_curr(rq_of(cfs_rq));
 }

 static void
@@ -5284,9 +5064,6 @@ set_next_entity(struct cfs_rq *cfs_rq, s
 	se->prev_sum_exec_runtime = se->sum_exec_runtime;
 }

-static int
-wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
-
 /*
  * Pick the next process, keeping these things in mind, in this order:
  *  1) keep things fair between processes/task groups
 *
@@ -5297,53 +5074,14 @@ wakeup_preempt_entit
 static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
-	struct sched_entity *left, *se;
-
-	if (sched_feat(EEVDF)) {
-		/*
-		 * Enabling NEXT_BUDDY will affect latency but not fairness.
-		 */
-		if (sched_feat(NEXT_BUDDY) &&
-		    cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next))
-			return cfs_rq->next;
-
-		return pick_eevdf(cfs_rq);
-	}
-
-	se = left = pick_cfs(cfs_rq, curr);
-
 	/*
-	 * Avoid running the skip buddy, if running something else can
-	 * be done without getting too unfair.
+	 * Enabling NEXT_BUDDY will affect latency but not fairness.
 	 */
-	if (cfs_rq->skip && cfs_rq->skip == se) {
-		struct sched_entity *second;
+	if (sched_feat(NEXT_BUDDY) &&
+	    cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next))
+		return cfs_rq->next;

-		if (se == curr) {
-			second = __pick_first_entity(cfs_rq);
-		} else {
-			second = __pick_next_entity(se);
-			if (!second || (curr && entity_before(curr, second)))
-				second = curr;
-		}
-
-		if (second && wakeup_preempt_entity(second, left) < 1)
-			se = second;
-	}
-
-	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1) {
-		/*
-		 * Someone really wants this to run. If it's not unfair, run it.
-		 */
-		se = cfs_rq->next;
-	} else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, left) < 1) {
-		/*
-		 * Prefer last buddy, try to return the CPU to a preempted task.
-		 */
-		se = cfs_rq->last;
-	}
-
-	return se;
+	return pick_eevdf(cfs_rq);
 }

 static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq);
@@ -5360,8 +5098,6 @@ static void put_prev_entity(struct cfs_r
 	/* throttle cfs_rqs exceeding runtime */
 	check_cfs_rq_runtime(cfs_rq);

-	check_spread(cfs_rq, prev);
-
 	if (prev->on_rq) {
 		update_stats_wait_start_fair(cfs_rq, prev);
 		/* Put 'current' back into the tree. */
@@ -6434,8 +6170,7 @@ static void hrtick_update(struct rq *rq)
 	if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
 		return;

-	if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency)
-		hrtick_start_fair(rq, curr);
+	hrtick_start_fair(rq, curr);
 }
 #else /* !CONFIG_SCHED_HRTICK */
 static inline void
@@ -6476,17 +6211,6 @@ static int sched_idle_rq(struct rq *rq)
 			rq->nr_running);
 }

-/*
- * Returns true if cfs_rq only has SCHED_IDLE entities enqueued. Note the use
- * of idle_nr_running, which does not consider idle descendants of normal
- * entities.
- */
-static bool sched_idle_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	return cfs_rq->nr_running &&
-		cfs_rq->nr_running == cfs_rq->idle_nr_running;
-}
-
 #ifdef CONFIG_SMP
 static int sched_idle_cpu(int cpu)
 {
@@ -7972,66 +7696,6 @@ balance_fair(struct rq *rq, struct task_
 }
 #endif /* CONFIG_SMP */

-static unsigned long wakeup_gran(struct sched_entity *se)
-{
-	unsigned long gran = sysctl_sched_wakeup_granularity;
-
-	/*
-	 * Since its curr running now, convert the gran from real-time
-	 * to virtual-time in his units.
-	 *
-	 * By using 'se' instead of 'curr' we penalize light tasks, so
-	 * they get preempted easier. That is, if 'se' < 'curr' then
-	 * the resulting gran will be larger, therefore penalizing the
-	 * lighter, if otoh 'se' > 'curr' then the resulting gran will
-	 * be smaller, again penalizing the lighter task.
-	 *
-	 * This is especially important for buddies when the leftmost
-	 * task is higher priority than the buddy.
-	 */
-	return calc_delta_fair(gran, se);
-}
-
-/*
- * Should 'se' preempt 'curr'.
- *
- *             |s1
- *        |s2
- *   |s3
- *         g
- *      |<--->|c
- *
- *  w(c, s1) = -1
- *  w(c, s2) =  0
- *  w(c, s3) =  1
- *
- */
-static int
-wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
-{
-	s64 gran, vdiff = curr->vruntime - se->vruntime;
-
-	if (vdiff <= 0)
-		return -1;
-
-	gran = wakeup_gran(se);
-	if (vdiff > gran)
-		return 1;
-
-	return 0;
-}
-
-static void set_last_buddy(struct sched_entity *se)
-{
-	for_each_sched_entity(se) {
-		if (SCHED_WARN_ON(!se->on_rq))
-			return;
-		if (se_is_idle(se))
-			return;
-		cfs_rq_of(se)->last = se;
-	}
-}
-
 static void set_next_buddy(struct sched_entity *se)
 {
 	for_each_sched_entity(se) {
@@ -8043,12 +7707,6 @@ static void set_next_buddy(struct sched_
 	}
 }

-static void set_skip_buddy(struct sched_entity *se)
-{
-	for_each_sched_entity(se)
-		cfs_rq_of(se)->skip = se;
-}
-
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -8057,7 +7715,6 @@ static void check_preempt_wakeup(struct
 	struct task_struct *curr = rq->curr;
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
-	int scale = cfs_rq->nr_running >= sched_nr_latency;
 	int next_buddy_marked = 0;
 	int cse_is_idle, pse_is_idle;

@@ -8073,7 +7730,7 @@ static void check_preempt_wakeup(struct
 	if (unlikely(throttled_hierarchy(cfs_rq_of(pse))))
 		return;

-	if (sched_feat(NEXT_BUDDY) && scale && !(wake_flags & WF_FORK)) {
+	if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK)) {
 		set_next_buddy(pse);
 		next_buddy_marked = 1;
 	}
@@ -8121,44 +7778,16 @@ static void check_preempt_wakeup(struct
 	cfs_rq = cfs_rq_of(se);
 	update_curr(cfs_rq);

-	if (sched_feat(EEVDF)) {
-		/*
-		 * XXX pick_eevdf(cfs_rq) != se ?
-		 */
-		if (pick_eevdf(cfs_rq) == pse)
-			goto preempt;
-
-		return;
-	}
-
-	if (wakeup_preempt_entity(se, pse) == 1) {
-		/*
-		 * Bias pick_next to pick the sched entity that is
-		 * triggering this preemption.
-		 */
-		if (!next_buddy_marked)
-			set_next_buddy(pse);
+	/*
+	 * XXX pick_eevdf(cfs_rq) != se ?
+	 */
+	if (pick_eevdf(cfs_rq) == pse)
 		goto preempt;
-	}

 	return;

preempt:
 	resched_curr(rq);
-
-	/*
-	 * Only set the backward buddy when the current task is still
-	 * on the rq. This can happen when a wakeup gets interleaved
-	 * with schedule on the ->pre_schedule() or idle_balance()
-	 * point, either of which can * drop the rq lock.
-	 *
-	 * Also, during early boot the idle thread is in the fair class,
-	 * for obvious reasons its a bad idea to schedule back to it.
-	 */
-	if (unlikely(!se->on_rq || curr == rq->idle))
-		return;
-
-	if (sched_feat(LAST_BUDDY) && scale && entity_is_task(se))
-		set_last_buddy(se);
 }

 #ifdef CONFIG_SMP
@@ -8359,8 +7988,6 @@ static void put_prev_task_fair(struct rq

 /*
  * sched_yield() is very simple
- *
- * The magic of dealing with the ->skip buddy is in pick_next_entity.
 */
 static void yield_task_fair(struct rq *rq)
 {
@@ -8376,23 +8003,19 @@ static void yield_task_fair(struct rq *r

 	clear_buddies(cfs_rq, se);

-	if (sched_feat(EEVDF) || curr->policy != SCHED_BATCH) {
-		update_rq_clock(rq);
-		/*
-		 * Update run-time statistics of the 'current'.
-		 */
-		update_curr(cfs_rq);
-		/*
-		 * Tell update_rq_clock() that we've just updated,
-		 * so we don't do microscopic update in schedule()
-		 * and double the fastpath cost.
-		 */
-		rq_clock_skip_update(rq);
-	}
-	if (sched_feat(EEVDF))
-		se->deadline += calc_delta_fair(se->slice, se);
+	update_rq_clock(rq);
+	/*
+	 * Update run-time statistics of the 'current'.
+	 */
+	update_curr(cfs_rq);
+	/*
+	 * Tell update_rq_clock() that we've just updated,
+	 * so we don't do microscopic update in schedule()
+	 * and double the fastpath cost.
+	 */
+	rq_clock_skip_update(rq);

-	set_skip_buddy(se);
+	se->deadline += calc_delta_fair(se->slice, se);
 }

 static bool yield_to_task_fair(struct rq *rq, struct task_struct *p)
@@ -8635,8 +8258,7 @@ static int task_hot(struct task_struct *
 	 * Buddy candidates are cache hot:
 	 */
 	if (sched_feat(CACHE_HOT_BUDDY) && env->dst_rq->nr_running &&
-	    (&p->se == cfs_rq_of(&p->se)->next ||
-	     &p->se == cfs_rq_of(&p->se)->last))
+	    (&p->se == cfs_rq_of(&p->se)->next))
 		return 1;

 	if (sysctl_sched_migration_cost == -1)
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -15,13 +15,6 @@ SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 SCHED_FEAT(NEXT_BUDDY, false)

 /*
- * Prefer to schedule the task that ran last (when we did
- * wake-preempt) as that likely will touch the same data, increases
- * cache locality.
- */
-SCHED_FEAT(LAST_BUDDY, true)
-
-/*
  * Consider buddies to be cache hot, decreases the likeliness of a
  * cache buddy being migrated away, increases cache locality.
  */
@@ -93,8 +86,3 @@ SCHED_FEAT(UTIL_EST, true)
 SCHED_FEAT(UTIL_EST_FASTUP, true)

 SCHED_FEAT(LATENCY_WARN, false)
-
-SCHED_FEAT(ALT_PERIOD, true)
-SCHED_FEAT(BASE_SLICE, true)
-
-SCHED_FEAT(EEVDF, true)
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -580,8 +580,6 @@ struct cfs_rq {
 	 */
 	struct sched_entity	*curr;
 	struct sched_entity	*next;
-	struct sched_entity	*last;
-	struct sched_entity	*skip;

 #ifdef	CONFIG_SCHED_DEBUG
 	unsigned int		nr_spread_over;
@@ -2466,10 +2464,7 @@ extern const_debug unsigned int sysctl_s
 extern const_debug unsigned int sysctl_sched_migration_cost;

 #ifdef CONFIG_SCHED_DEBUG
-extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
-extern unsigned int sysctl_sched_idle_min_granularity;
-extern unsigned int sysctl_sched_wakeup_granularity;

 extern int sysctl_resched_latency_warn_ms;
 extern int sysctl_resched_latency_warn_once;
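With the CFS pick and wakeup-preemption paths gone, everything funnels into pick_eevdf(). The policy it implements fits in a few lines; the sketch below is an O(n) rendering for illustration only (the kernel walks an augmented rbtree), and it treats eligibility as an unweighted vruntime <= average test where the real entity_eligible() uses the load-weighted average:

  #include <stddef.h>
  #include <stdint.h>

  struct entity { int64_t vruntime, deadline; };

  /* Earliest Eligible Virtual Deadline First: among entities with
   * non-negative lag (vruntime at or before the average), pick the one
   * whose virtual deadline expires first. */
  static struct entity *pick_eevdf_scan(struct entity *q, size_t n,
                                        int64_t avg_vruntime)
  {
          struct entity *best = NULL;

          for (size_t i = 0; i < n; i++) {
                  if (q[i].vruntime > avg_vruntime)   /* not eligible */
                          continue;
                  if (!best || q[i].deadline < best->deadline)
                          best = &q[i];
          }
          return best;    /* NULL only when the queue is empty */
  }

Note that the minimum-vruntime entity is always at or below the average, so a non-empty queue always yields a pick; both tick preemption (pick_eevdf(cfs_rq) != curr) and wakeup preemption (pick_eevdf(cfs_rq) == pse) reduce to re-running this selection.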
From patchwork Tue Mar 28 09:26:34 2023
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 28 Mar 2023 11:26:34 +0200
Subject: [PATCH 12/17] sched/debug: Rename min_granularity to base_slice
Message-ID: <20230328110354.426388922@infradead.org>
References: <20230328092622.062917921@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/debug.c |    4 ++--
 kernel/sched/fair.c  |   10 +++++-----
 kernel/sched/sched.h |    2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -308,7 +308,7 @@ static __init int sched_init_debug(void)
 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
 #endif

-	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
+	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);

 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
@@ -816,7 +816,7 @@ static void sched_debug_header(struct se
 	SEQ_printf(m, "  .%-40s: %Ld\n", #x, (long long)(x))
 #define PN(x) \
 	SEQ_printf(m, "  .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
-	PN(sysctl_sched_min_granularity);
+	PN(sysctl_sched_base_slice);
 	P(sysctl_sched_child_runs_first);
 	P(sysctl_sched_features);
 #undef PN
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -75,8 +75,8 @@ unsigned int sysctl_sched_tunable_scalin
 *
 * (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds)
 */
-unsigned int sysctl_sched_min_granularity			= 750000ULL;
-static unsigned int normalized_sysctl_sched_min_granularity	= 750000ULL;
+unsigned int sysctl_sched_base_slice			= 750000ULL;
+static unsigned int normalized_sysctl_sched_base_slice	= 750000ULL;

 /*
  * After fork, child runs first. If set to 0 (default) then
  * parent will (try to) run first.
  */
@@ -237,7 +237,7 @@ static void update_sysctl(void)

 #define SET_SYSCTL(name) \
 	(sysctl_##name = (factor) * normalized_sysctl_##name)
-	SET_SYSCTL(sched_min_granularity);
+	SET_SYSCTL(sched_base_slice);
 #undef SET_SYSCTL
 }
@@ -882,7 +882,7 @@ int sched_update_scaling(void)

 #define WRT_SYSCTL(name) \
 	(normalized_sysctl_##name = sysctl_##name / (factor))
-	WRT_SYSCTL(sched_min_granularity);
+	WRT_SYSCTL(sched_base_slice);
 #undef WRT_SYSCTL

 	return 0;
@@ -892,7 +892,7 @@ int sched_update_scaling(void)
 long calc_latency_offset(int prio)
 {
 	u32 weight = sched_prio_to_weight[prio];
-	u64 base = sysctl_sched_min_granularity;
+	u64 base = sysctl_sched_base_slice;

 	return div_u64(base << SCHED_FIXEDPOINT_SHIFT, weight);
 }
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2464,7 +2464,7 @@ extern const_debug unsigned int sysctl_s
 extern const_debug unsigned int sysctl_sched_migration_cost;

 #ifdef CONFIG_SCHED_DEBUG
-extern unsigned int sysctl_sched_min_granularity;
+extern unsigned int sysctl_sched_base_slice;

 extern int sysctl_resched_latency_warn_ms;
 extern int sysctl_resched_latency_warn_once;
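The renamed base slice is the request size r_i fed into the EEVDF deadline. With calc_latency_offset() now reading sysctl_sched_base_slice, the per-level request works out as in this sketch, using the kernel's sched_prio_to_weight values (88761 for nice -20, 1024 for nice 0, 15 for nice 19) and SCHED_FIXEDPOINT_SHIFT = 10:

  #include <stdint.h>

  /* slice = (base << 10) / weight, cf. calc_latency_offset():
   *   latency-nice -20: 750000 * 1024 / 88761 ~=     8.7 us
   *   latency-nice   0: 750000 * 1024 / 1024   =   750.0 us
   *   latency-nice  19: 750000 * 1024 / 15     = 51200.0 us
   */
  static uint64_t request_slice_ns(uint64_t base_slice_ns, uint32_t weight)
  {
          return (base_slice_ns << 10) / weight;
  }

So a more latency-sensitive setting (higher weight) yields a smaller request and hence an earlier virtual deadline, without changing the task's long-run CPU share.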
From patchwork Tue Mar 28 09:26:35 2023
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 28 Mar 2023 11:26:35 +0200
Subject: [PATCH 13/17] sched: Merge latency_offset into slice
Message-ID: <20230328110354.494493579@infradead.org>
References: <20230328092622.062917921@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/sched.h |    2 --
 kernel/sched/core.c   |   17 +++++++----------
 kernel/sched/fair.c   |   29 ++++++++++++-----------------
 kernel/sched/sched.h  |    2 +-
 4 files changed, 20 insertions(+), 30 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -573,8 +573,6 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long			runnable_weight;
 #endif
-	/* preemption offset in ns */
-	long				latency_offset;

 #ifdef CONFIG_SMP
 	/*
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1285,9 +1285,10 @@ static void set_load_weight(struct task_
 	}
 }

-static void set_latency_offset(struct task_struct *p)
+static inline void set_latency_prio(struct task_struct *p, int prio)
 {
-	p->se.latency_offset = calc_latency_offset(p->latency_prio - MAX_RT_PRIO);
+	p->latency_prio = prio;
+	set_latency_fair(&p->se, prio - MAX_RT_PRIO);
 }

 #ifdef CONFIG_UCLAMP_TASK
@@ -4442,7 +4443,7 @@ static void __sched_fork(unsigned long c
 	p->se.vlag			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);

-	set_latency_offset(p);
+	set_latency_prio(p, p->latency_prio);

 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq			= NULL;
@@ -4694,9 +4695,7 @@ int sched_fork(unsigned long clone_flags

 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
-
-		p->latency_prio = NICE_TO_PRIO(0);
-		set_latency_offset(p);
+		set_latency_prio(p, NICE_TO_PRIO(0));

 		/*
 		 * We don't need the reset flag anymore after the fork. It has
@@ -7469,10 +7468,8 @@ static void __setscheduler_params(struct
 static void __setscheduler_latency(struct task_struct *p,
 				   const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
-		p->latency_prio = NICE_TO_PRIO(attr->sched_latency_nice);
-		set_latency_offset(p);
-	}
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		set_latency_prio(p, NICE_TO_PRIO(attr->sched_latency_nice));
 }

 /*
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -919,12 +919,19 @@ int sched_update_scaling(void)
 }
 #endif

-long calc_latency_offset(int prio)
+void set_latency_fair(struct sched_entity *se, int prio)
 {
 	u32 weight = sched_prio_to_weight[prio];
 	u64 base = sysctl_sched_base_slice;

-	return div_u64(base << SCHED_FIXEDPOINT_SHIFT, weight);
+	/*
+	 * For EEVDF the virtual time slope is determined by w_i (iow.
+	 * nice) while the request time r_i is determined by
+	 * latency-nice.
+	 *
+	 * Smaller request gets better latency.
+	 */
+	se->slice = div_u64(base << SCHED_FIXEDPOINT_SHIFT, weight);
 }

 /*
@@ -937,13 +944,6 @@ static void update_deadline(struct cfs_r
 		return;

 	/*
-	 * For EEVDF the virtual time slope is determined by w_i (iow.
-	 * nice) while the request time r_i is determined by
-	 * latency-nice.
-	 */
-	se->slice = se->latency_offset;
-
-	/*
 	 * EEVDF: vd_i = ve_i + r_i / w_i
 	 */
 	se->deadline = se->vruntime + calc_delta_fair(se->slice, se);
@@ -12231,7 +12231,7 @@ void init_tg_cfs_entry(struct task_group

 		se->my_q = cfs_rq;

-		se->latency_offset = calc_latency_offset(tg->latency_prio - MAX_RT_PRIO);
+		set_latency_fair(se, tg->latency_prio - MAX_RT_PRIO);

 		/* guarantee group entities always have weight */
 		update_load_set(&se->load, NICE_0_LOAD);
@@ -12365,7 +12365,6 @@ int sched_group_set_idle(struct task_gro

 int sched_group_set_latency(struct task_group *tg, int prio)
 {
-	long latency_offset;
 	int i;

 	if (tg == &root_task_group)
@@ -12379,13 +12378,9 @@ int sched_group_set_latency(struct task_
 	}

 	tg->latency_prio = prio;
-	latency_offset = calc_latency_offset(prio - MAX_RT_PRIO);

-	for_each_possible_cpu(i) {
-		struct sched_entity *se = tg->se[i];
-
-		WRITE_ONCE(se->latency_offset, latency_offset);
-	}
+	for_each_possible_cpu(i)
+		set_latency_fair(tg->se[i], prio - MAX_RT_PRIO);

 	mutex_unlock(&shares_mutex);
 	return 0;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2477,7 +2477,7 @@ extern unsigned int sysctl_numa_balancin
 extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif

-extern long calc_latency_offset(int prio);
+extern void set_latency_fair(struct sched_entity *se, int prio);

 #ifdef CONFIG_SCHED_HRTICK
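After the merge, se->slice is the single source of truth for the request size and update_deadline() reduces to vd_i = ve_i + r_i/w_i. A sketch of that arithmetic, taking NICE_0_LOAD as 1024 for illustration and approximating calc_delta_fair() as a straight NICE_0/weight scaling (the kernel version uses a precomputed fixed-point inverse weight):

  #include <stdint.h>

  #define NICE_0_LOAD 1024ULL

  struct entity { uint64_t vruntime, deadline, slice, weight; };

  /* vd_i = ve_i + r_i / w_i: a heavier entity spans less virtual time
   * for the same wall-clock slice, so its deadline lands sooner. */
  static void update_deadline_sketch(struct entity *se)
  {
          se->deadline = se->vruntime + (se->slice * NICE_0_LOAD) / se->weight;
  }

The same helper, set_latency_fair(), now serves tasks and group entities alike, which is why the per-entity latency_offset field can be dropped.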
From patchwork Tue Mar 28 09:26:36 2023
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 28 Mar 2023 11:26:36 +0200
Subject: [PATCH 14/17] sched/eevdf: Better handle mixed slice length
Message-ID: <20230328110354.562078801@infradead.org>
References: <20230328092622.062917921@infradead.org>
In the case where (due to latency-nice) there are different request
sizes in the tree, the smaller requests tend to be dominated by the
larger. Also note how the EEVDF lag limits are based on r_max.

Therefore, add a heuristic that, for the mixed request size case, moves
smaller requests to placement strategy #2, which ensures they are
immediately eligible and, due to their smaller (virtual) deadline, will
cause preemption.

NOTE: this relies on update_entity_lag() to impose lag limits above a
single slice.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c     |   14 ++++++++++++++
 kernel/sched/features.h |    1 +
 kernel/sched/sched.h    |    1 +
 3 files changed, 16 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -616,6 +616,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq,
 	s64 key = entity_key(cfs_rq, se);

 	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->avg_slice += se->slice * weight;
 	cfs_rq->avg_load += weight;
 }

@@ -626,6 +627,7 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq,
 	s64 key = entity_key(cfs_rq, se);

 	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->avg_slice -= se->slice * weight;
 	cfs_rq->avg_load -= weight;
 }

@@ -4832,6 +4834,18 @@ place_entity(struct cfs_rq *cfs_rq, stru
 		lag = se->vlag;

 	/*
+	 * For latency sensitive tasks; those that have a shorter than
+	 * average slice and do not fully consume the slice, transition
+	 * to EEVDF placement strategy #2.
+	 */
+	if (sched_feat(PLACE_FUDGE) &&
+	    cfs_rq->avg_slice > se->slice * cfs_rq->avg_load) {
+		lag += vslice;
+		if (lag > 0)
+			lag = 0;
+	}
+
+	/*
 	 * If we want to place a task and preserve lag, we have to
 	 * consider the effect of the new entity on the weighted
 	 * average and compensate for this, otherwise lag can quickly
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -5,6 +5,7 @@
 * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
 */
 SCHED_FEAT(PLACE_LAG, true)
+SCHED_FEAT(PLACE_FUDGE, true)
 SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)

 /*
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -559,6 +559,7 @@ struct cfs_rq {
 	unsigned int		idle_h_nr_running; /* SCHED_IDLE */

 	s64			avg_vruntime;
+	u64			avg_slice;
 	u64			avg_load;

 	u64			exec_clock;
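The PLACE_FUDGE test in place_entity() compares the entity's request against the load-weighted average request without a division: with avg_slice and avg_load maintained as weighted sums, cfs_rq->avg_slice > se->slice * cfs_rq->avg_load is exactly se->slice < avg_slice / avg_load. A sketch of the bookkeeping, mirroring avg_vruntime_add()/_sub() above:

  #include <stdint.h>

  struct rq_avg { uint64_t avg_slice, avg_load; };

  static void avg_add(struct rq_avg *rq, uint64_t slice, uint64_t weight)
  {
          rq->avg_slice += slice * weight;  /* weighted sum of requests */
          rq->avg_load  += weight;
  }

  static void avg_sub(struct rq_avg *rq, uint64_t slice, uint64_t weight)
  {
          rq->avg_slice -= slice * weight;
          rq->avg_load  -= weight;
  }

  /* True when this entity's request is below the weighted average,
   * triggering the move to placement strategy #2 (lag clamped to <= 0). */
  static int shorter_than_avg(const struct rq_avg *rq, uint64_t slice)
  {
          return rq->avg_slice > slice * rq->avg_load;
  }

Clamping the adjusted lag to at most zero is what "strategy #2" means here: the short-request task is placed exactly eligible, and its small virtual deadline then does the preempting.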
From patchwork Tue Mar 28 09:26:37 2023
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 28 Mar 2023 11:26:37 +0200
Subject: [PATCH 15/17] [RFC] sched/eevdf: Sleeper bonus
Message-ID: <20230328110354.641979416@infradead.org>
References: <20230328092622.062917921@infradead.org>
Add a sleeper bonus hack, but keep it default disabled. This should
allow easy testing of whether regressions are due to this.

Specifically, this 'restores' performance for things like starve,
stress-futex and stress-nanosleep, which rely on a sleeper bonus to
compete against an always-running parent (the fair 67%/33% split vs
the 50%/50% bonus thing).

OTOH this completely destroys latency and hackbench (as in 5x worse).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c     |   47 ++++++++++++++++++++++++++++++++++++++++-------
 kernel/sched/features.h |    1 +
 kernel/sched/sched.h    |    3 ++-
 3 files changed, 43 insertions(+), 8 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4819,7 +4819,7 @@ static inline void update_misfit_status(
 #endif /* CONFIG_SMP */

 static void
-place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
+place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
 	u64 vslice = calc_delta_fair(se->slice, se);
 	u64 vruntime = avg_vruntime(cfs_rq);
@@ -4878,22 +4878,55 @@ place_entity(struct cfs_rq *cfs_rq, stru
 		if (WARN_ON_ONCE(!load))
 			load = 1;
 		lag = div_s64(lag, load);
+
+		vruntime -= lag;
+	}
+
+	/*
+	 * Base the deadline on the 'normal' EEVDF placement policy in an
+	 * attempt to not let the bonus crud below wreck things completely.
+	 */
+	se->deadline = vruntime;
+
+	/*
+	 * The whole 'sleeper' bonus hack... :-/ This is strictly unfair.
+	 *
+	 * By giving a sleeping task a little boost, it becomes possible for a
+	 * 50% task to compete equally with a 100% task. That is, strictly fair
+	 * that setup would result in a 67% / 33% split. Sleeper bonus will
+	 * change that to 50% / 50%.
+	 *
+	 * This thing hurts my brain, because tasks leaving with negative lag
+	 * will move 'time' backward, so comparing against a historical
+	 * se->vruntime is dodgy as heck.
+	 */
+	if (sched_feat(PLACE_BONUS) &&
+	    (flags & ENQUEUE_WAKEUP) && !(flags & ENQUEUE_MIGRATED)) {
+		/*
+		 * If se->vruntime is ahead of vruntime, something dodgy
+		 * happened and we cannot give bonus due to not having valid
+		 * history.
+		 */
+		if ((s64)(se->vruntime - vruntime) < 0) {
+			vruntime -= se->slice/2;
+			vruntime = max_vruntime(se->vruntime, vruntime);
+		}
 	}

-	se->vruntime = vruntime - lag;
+	se->vruntime = vruntime;

 	/*
 	 * When joining the competition; the exisiting tasks will be,
 	 * on average, halfway through their slice, as such start tasks
 	 * off with half a slice to ease into the competition.
 	 */
-	if (sched_feat(PLACE_DEADLINE_INITIAL) && initial)
+	if (sched_feat(PLACE_DEADLINE_INITIAL) && (flags & ENQUEUE_INITIAL))
 		vslice /= 2;

 	/*
 	 * EEVDF: vd_i = ve_i + r_i/w_i
 	 */
-	se->deadline = se->vruntime + vslice;
+	se->deadline += vslice;
 }

 static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
@@ -4910,7 +4943,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	 * update_curr().
 	 */
 	if (curr)
-		place_entity(cfs_rq, se, 0);
+		place_entity(cfs_rq, se, flags);

 	update_curr(cfs_rq);

@@ -4937,7 +4970,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	 * we can place the entity.
	 */
 	if (!curr)
-		place_entity(cfs_rq, se, 0);
+		place_entity(cfs_rq, se, flags);
 
 	account_entity_enqueue(cfs_rq, se);
 
@@ -11933,7 +11966,7 @@ static void task_fork_fair(struct task_s
 	curr = cfs_rq->curr;
 	if (curr)
 		update_curr(cfs_rq);
-	place_entity(cfs_rq, se, 1);
+	place_entity(cfs_rq, se, ENQUEUE_INITIAL);
 	rq_unlock(rq, &rf);
 }
 
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -7,6 +7,7 @@
 SCHED_FEAT(PLACE_LAG, true)
 SCHED_FEAT(PLACE_FUDGE, true)
 SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
+SCHED_FEAT(PLACE_BONUS, false)
 
 /*
  * Prefer to schedule the task we woke last (assuming it failed
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2143,7 +2143,7 @@ extern const u32 sched_prio_to_wmult[40
  * ENQUEUE_HEAD - place at front of runqueue (tail if not specified)
  * ENQUEUE_REPLENISH - CBS (replenish runtime and postpone deadline)
  * ENQUEUE_MIGRATED - the task was migrated during wakeup
- *
+ * ENQUEUE_INITIAL - place a new task (fork/clone)
  */
 
 #define DEQUEUE_SLEEP		0x01
@@ -2163,6 +2163,7 @@ extern const u32 sched_prio_to_wmult[40
 #else
 #define ENQUEUE_MIGRATED	0x00
 #endif
+#define ENQUEUE_INITIAL		0x80
 
 #define RETRY_TASK		((void *)-1UL)
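[Editorial illustration] To make the 67%/33% vs 50%/50% arithmetic from
the changelog above concrete, here is a toy user-space model. It is
illustrative only, not kernel code: it assumes two equally weighted
tasks on one CPU, where A is always runnable and B computes in bursts of
C ticks followed by C ticks of sleep, with an idealized strict 50/50
alternation whenever both are runnable.

/*
 * Toy model of strict fairness vs a periodically sleeping task.
 * Compile with: cc -O2 toy_split.c && ./a.out
 */
#include <stdio.h>

int main(void)
{
	const int C = 4;		/* B's burst length, in ticks */
	long a = 0, b = 0;		/* CPU time received so far */
	int b_credit = 0, b_sleep = 0;
	long t;

	for (t = 0; t < 3000000; t++) {
		if (b_sleep > 0) {
			/* B is asleep: A owns the CPU */
			b_sleep--;
			a++;
		} else if (t & 1) {
			/* both runnable: strictly fair 50/50 alternation */
			b++;
			if (++b_credit == C) {
				b_credit = 0;
				b_sleep = C;
			}
		} else {
			a++;
		}
	}

	/* Prints roughly "A: 66.7%  B: 33.3%" -- the strictly fair split. */
	printf("A: %.1f%%  B: %.1f%%\n",
	       100.0 * a / (a + b), 100.0 * b / (a + b));
	return 0;
}

Per cycle, B burns C ticks of CPU spread over a 2C-tick runnable window
(A gets the other C), then sleeps for C ticks during which A runs alone,
so A collects 2C of every 3C ticks. A wakeup bonus that lets B run ahead
of A after each sleep is what pushes this back toward 50%/50%.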
From patchwork Tue Mar 28 09:26:38 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75996
Message-ID: <20230328110354.712296502@infradead.org>
Date: Tue, 28 Mar 2023 11:26:38 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
 chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
 pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
 joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
 yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
 efault@gmx.de
Subject: [PATCH 16/17] [RFC] sched/eevdf: Minimal vavg option
References: <20230328092622.062917921@infradead.org>
An alternative means of tracking min_vruntime that minimizes the deltas
going into avg_vruntime -- note that because vavg can move backwards
this is all sorts of tricky. It is also more expensive because of the
extra divisions...

I have not found this convincing. (A sketch of the relative-key
bookkeeping this relies on follows the patch below.)

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c     | 51 ++++++++++++++++++++++++++++--------------------
 kernel/sched/features.h |  2 +
 2 files changed, 32 insertions(+), 21 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -732,28 +732,37 @@ static u64 __update_min_vruntime(struct
 
 static void update_min_vruntime(struct cfs_rq *cfs_rq)
 {
-	struct sched_entity *se = __pick_first_entity(cfs_rq);
-	struct sched_entity *curr = cfs_rq->curr;
-
-	u64 vruntime = cfs_rq->min_vruntime;
-
-	if (curr) {
-		if (curr->on_rq)
-			vruntime = curr->vruntime;
-		else
-			curr = NULL;
+	if (sched_feat(MINIMAL_VA)) {
+		u64 vruntime = avg_vruntime(cfs_rq);
+		s64 delta = (s64)(vruntime - cfs_rq->min_vruntime);
+
+		avg_vruntime_update(cfs_rq, delta);
+
+		u64_u32_store(cfs_rq->min_vruntime, vruntime);
+	} else {
+		struct sched_entity *se = __pick_first_entity(cfs_rq);
+		struct sched_entity *curr = cfs_rq->curr;
+
+		u64 vruntime = cfs_rq->min_vruntime;
+
+		if (curr) {
+			if (curr->on_rq)
+				vruntime = curr->vruntime;
+			else
+				curr = NULL;
+		}
+
+		if (se) {
+			if (!curr)
+				vruntime = se->vruntime;
+			else
+				vruntime = min_vruntime(vruntime, se->vruntime);
+		}
+
+		/* ensure we never gain time by being placed backwards. */
+		u64_u32_store(cfs_rq->min_vruntime,
+			      __update_min_vruntime(cfs_rq, vruntime));
 	}
-
-	if (se) {
-		if (!curr)
-			vruntime = se->vruntime;
-		else
-			vruntime = min_vruntime(vruntime, se->vruntime);
-	}
-
-	/* ensure we never gain time by being placed backwards. */
-	u64_u32_store(cfs_rq->min_vruntime,
-		      __update_min_vruntime(cfs_rq, vruntime));
 }
 
 static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -9,6 +9,8 @@ SCHED_FEAT(PLACE_FUDGE, true)
 SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 SCHED_FEAT(PLACE_BONUS, false)
 
+SCHED_FEAT(MINIMAL_VA, false)
+
 /*
  * Prefer to schedule the task we woke last (assuming it failed
  * wakeup-preemption), since it's likely going to consume data we
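[Editorial illustration] The MINIMAL_VA branch above rebases
min_vruntime onto the current average and compensates the weighted sum
with avg_vruntime_update(). A minimal user-space sketch of that
relative-key bookkeeping follows. It is not the kernel implementation;
it assumes, as in the avg_vruntime machinery introduced earlier in this
series, that avg_vruntime is kept as \Sum w_i * (v_i - min_vruntime) so
the true average is min_vruntime + avg_vruntime / avg_load, and that a
shift of the reference point by delta is compensated by subtracting
delta * avg_load.

#include <stdio.h>
#include <stdint.h>

struct toy_rq {
	int64_t  avg_vruntime;	/* \Sum w_i * (v_i - min_vruntime) */
	int64_t  avg_load;	/* \Sum w_i */
	uint64_t min_vruntime;	/* the moving reference point */
};

/* Shift the reference point by delta: every relative key shrinks by
 * delta, so the weighted sum shrinks by delta * avg_load. */
static void toy_avg_vruntime_update(struct toy_rq *rq, int64_t delta)
{
	rq->avg_vruntime -= rq->avg_load * delta;
}

static int64_t toy_avg(const struct toy_rq *rq)
{
	return (int64_t)rq->min_vruntime + rq->avg_vruntime / rq->avg_load;
}

int main(void)
{
	struct toy_rq rq = { .min_vruntime = 100 };
	int64_t vavg, delta;

	/* two entities: weight 2 at vruntime 130, weight 1 at 160 */
	rq.avg_vruntime += 2 * (130 - (int64_t)rq.min_vruntime);
	rq.avg_load += 2;
	rq.avg_vruntime += 1 * (160 - (int64_t)rq.min_vruntime);
	rq.avg_load += 1;

	/* (2*130 + 1*160) / 3 = 140 */
	printf("avg before rebase: %lld\n", (long long)toy_avg(&rq));

	/* rebase min_vruntime onto the average, as MINIMAL_VA does */
	vavg = toy_avg(&rq);
	delta = vavg - (int64_t)rq.min_vruntime;
	toy_avg_vruntime_update(&rq, delta);
	rq.min_vruntime = vavg;

	/* still 140: the rebase must not change the observed average */
	printf("avg after rebase:  %lld\n", (long long)toy_avg(&rq));
	return 0;
}

The point of the exercise is that the relative keys (and thus the
division) stay small; the tricky part, as the changelog notes, is that
this reference point can move backwards.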
From patchwork Tue Mar 28 09:26:39 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 75986
Message-ID: <20230328110354.780171563@infradead.org>
Date: Tue, 28 Mar 2023 11:26:39 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
 chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
 pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
 joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
 yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
 efault@gmx.de
Subject: [PATCH 17/17] [DEBUG] sched/eevdf: Debug / validation crud
References: <20230328092622.062917921@infradead.org>
XXX do not merge

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c     | 95 ++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/features.h |  2 +
 2 files changed, 97 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -793,6 +793,92 @@ static inline bool min_deadline_update(s
 RB_DECLARE_CALLBACKS(static, min_deadline_cb, struct sched_entity,
 		     run_node, min_deadline, min_deadline_update);
 
+#ifdef CONFIG_SCHED_DEBUG
+struct validate_data {
+	s64 va;
+	s64 avg_vruntime;
+	s64 avg_load;
+	s64 min_deadline;
+};
+
+static void __print_se(struct cfs_rq *cfs_rq, struct sched_entity *se, int level,
+		       struct validate_data *data)
+{
+	static const char indent[] = "                                ";
+	unsigned long weight = scale_load_down(se->load.weight);
+	struct task_struct *p = NULL;
+
+	s64 v = se->vruntime - cfs_rq->min_vruntime;
+	s64 d = se->deadline - cfs_rq->min_vruntime;
+
+	data->avg_vruntime += v * weight;
+	data->avg_load += weight;
+
+	data->min_deadline = min(data->min_deadline, d);
+
+	if (entity_is_task(se))
+		p = task_of(se);
+
+	trace_printk("%.*s%lx w: %ld ve: %Ld lag: %Ld vd: %Ld vmd: %Ld %s (%d/%s)\n",
+		     level*2, indent, (unsigned long)se,
+		     weight,
+		     v, data->va - se->vruntime, d,
+		     se->min_deadline - cfs_rq->min_vruntime,
+		     entity_eligible(cfs_rq, se) ? "E" : "N",
+		     p ? p->pid : -1,
+		     p ? p->comm : "(null)");
+}
+
+static void __print_node(struct cfs_rq *cfs_rq, struct rb_node *node, int level,
+			 struct validate_data *data)
+{
+	if (!node)
+		return;
+
+	__print_se(cfs_rq, __node_2_se(node), level, data);
+	__print_node(cfs_rq, node->rb_left, level+1, data);
+	__print_node(cfs_rq, node->rb_right, level+1, data);
+}
+
+static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq);
+
+static void validate_cfs_rq(struct cfs_rq *cfs_rq, bool pick)
+{
+	struct sched_entity *curr = cfs_rq->curr;
+	struct rb_node *root = cfs_rq->tasks_timeline.rb_root.rb_node;
+	struct validate_data _data = {
+		.va = avg_vruntime(cfs_rq),
+		.min_deadline = (~0ULL) >> 1,
+	}, *data = &_data;
+
+	trace_printk("---\n");
+
+	__print_node(cfs_rq, root, 0, data);
+
+	trace_printk("min_deadline: %Ld avg_vruntime: %Ld / %Ld = %Ld\n",
+		     data->min_deadline,
+		     data->avg_vruntime, data->avg_load,
+		     data->avg_load ?
+		     div_s64(data->avg_vruntime, data->avg_load) : 0);
+
+	if (WARN_ON_ONCE(cfs_rq->avg_vruntime != data->avg_vruntime))
+		cfs_rq->avg_vruntime = data->avg_vruntime;
+
+	if (WARN_ON_ONCE(cfs_rq->avg_load != data->avg_load))
+		cfs_rq->avg_load = data->avg_load;
+
+	data->min_deadline += cfs_rq->min_vruntime;
+	WARN_ON_ONCE(cfs_rq->avg_load &&
+		     __node_2_se(root)->min_deadline != data->min_deadline);
+
+	if (curr && curr->on_rq)
+		__print_se(cfs_rq, curr, 0, data);
+
+	if (pick)
+		trace_printk("pick: %lx\n", (unsigned long)pick_eevdf(cfs_rq));
+}
+#else
+static inline void validate_cfs_rq(struct cfs_rq *cfs_rq, bool pick) { }
+#endif
+
 /*
  * Enqueue an entity into the rb-tree:
  */
@@ -802,6 +888,9 @@ static void __enqueue_entity(struct cfs_
 	se->min_deadline = se->deadline;
 	rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				__entity_less, &min_deadline_cb);
+
+	if (sched_feat(VALIDATE_QUEUE))
+		validate_cfs_rq(cfs_rq, true);
 }
 
 static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
@@ -809,6 +898,9 @@ static void __dequeue_entity(struct cfs_
 	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				  &min_deadline_cb);
 	avg_vruntime_sub(cfs_rq, se);
+
+	if (sched_feat(VALIDATE_QUEUE))
+		validate_cfs_rq(cfs_rq, true);
 }
 
 struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
@@ -894,6 +986,9 @@ static struct sched_entity *pick_eevdf(s
 	if (unlikely(!best)) {
 		struct sched_entity *left = __pick_first_entity(cfs_rq);
 		if (left) {
+			trace_printk("EEVDF scheduling fail, picking leftmost\n");
+			validate_cfs_rq(cfs_rq, false);
+			tracing_off();
 			pr_err("EEVDF scheduling fail, picking leftmost\n");
 			return left;
 		}
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -6,6 +6,8 @@ SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 SCHED_FEAT(MINIMAL_VA, false)
 
+SCHED_FEAT(VALIDATE_QUEUE, false)
+
 /*
  * Prefer to schedule the task we woke last (assuming it failed
  * wakeup-preemption), since it's likely going to consume data we
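[Editorial illustration] The VALIDATE_QUEUE walk above recomputes the
avg_vruntime sums and the min_deadline augment from scratch and WARNs
when the cached values disagree. A minimal user-space sketch of that
augmented-tree invariant check follows; it is illustrative only, with
plain binary-tree nodes standing in for the kernel's augmented rbtree,
and all names here are hypothetical rather than kernel API.

#include <stdio.h>
#include <stdint.h>

struct node {
	struct node *left, *right;
	int64_t deadline;	/* vd_i */
	int64_t min_deadline;	/* cached subtree minimum (the augment) */
};

/* Recompute the subtree minimum bottom-up and compare it against the
 * cached augment, flagging any stale value -- the same from-scratch
 * recomputation validate_cfs_rq() does with WARN_ON_ONCE(). */
static int64_t check_min_deadline(const struct node *n, int *ok)
{
	int64_t m, c;

	if (!n)
		return INT64_MAX;

	m = n->deadline;
	c = check_min_deadline(n->left, ok);
	if (c < m)
		m = c;
	c = check_min_deadline(n->right, ok);
	if (c < m)
		m = c;

	if (m != n->min_deadline) {
		fprintf(stderr, "stale augment: cached %lld, actual %lld\n",
			(long long)n->min_deadline, (long long)m);
		*ok = 0;
	}
	return m;
}

int main(void)
{
	struct node l = { .deadline = 10, .min_deadline = 10 };
	struct node r = { .deadline = 30, .min_deadline = 30 };
	struct node root = { .left = &l, .right = &r,
			     .deadline = 20, .min_deadline = 10 };
	int ok = 1;

	check_min_deadline(&root, &ok);
	printf("tree %s\n", ok ? "consistent" : "corrupt");

	root.min_deadline = 20;	/* simulate a stale augment */
	ok = 1;
	check_min_deadline(&root, &ok);
	printf("tree %s\n", ok ? "consistent" : "corrupt");
	return 0;
}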