From patchwork Wed May 31 11:58:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101430
Message-ID: <20230531124603.654144274@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:40 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 01/15] sched/fair: Add avg_vruntime
References: <20230531115839.089944915@infradead.org>
X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414596443968954?= X-GMAIL-MSGID: =?utf-8?q?1767414596443968954?= In order to move to an eligibility based scheduling policy it is needed to have a better approximation of the ideal scheduler. Specifically, for a virtual time weighted fair queueing based scheduler the ideal scheduler will be the weighted average of the individual virtual runtimes (math in the comment). As such, compute the weighted average to approximate the ideal scheduler -- note that the approximation is in the individual task behaviour, which isn't strictly conformant. Specifically consider adding a task with a vruntime left of center, in this case the average will move backwards in time -- something the ideal scheduler would of course never do. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/debug.c | 32 +++++------ kernel/sched/fair.c | 137 +++++++++++++++++++++++++++++++++++++++++++++++++-- kernel/sched/sched.h | 5 + 3 files changed, 154 insertions(+), 20 deletions(-) --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -626,10 +626,9 @@ static void print_rq(struct seq_file *m, void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) { - s64 MIN_vruntime = -1, min_vruntime, max_vruntime = -1, - spread, rq0_min_vruntime, spread0; + s64 left_vruntime = -1, min_vruntime, right_vruntime = -1, spread; + struct sched_entity *last, *first; struct rq *rq = cpu_rq(cpu); - struct sched_entity *last; unsigned long flags; #ifdef CONFIG_FAIR_GROUP_SCHED @@ -643,26 +642,25 @@ void print_cfs_rq(struct seq_file *m, in SPLIT_NS(cfs_rq->exec_clock)); raw_spin_rq_lock_irqsave(rq, flags); - if (rb_first_cached(&cfs_rq->tasks_timeline)) - MIN_vruntime = (__pick_first_entity(cfs_rq))->vruntime; + first = __pick_first_entity(cfs_rq); + if (first) + left_vruntime = first->vruntime; last = __pick_last_entity(cfs_rq); if (last) - max_vruntime = last->vruntime; + right_vruntime = last->vruntime; min_vruntime = cfs_rq->min_vruntime; - rq0_min_vruntime = cpu_rq(0)->cfs.min_vruntime; raw_spin_rq_unlock_irqrestore(rq, flags); - SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "MIN_vruntime", - SPLIT_NS(MIN_vruntime)); + + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "left_vruntime", + SPLIT_NS(left_vruntime)); SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "min_vruntime", SPLIT_NS(min_vruntime)); - SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "max_vruntime", - SPLIT_NS(max_vruntime)); - spread = max_vruntime - MIN_vruntime; - SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", - SPLIT_NS(spread)); - spread0 = min_vruntime - rq0_min_vruntime; - SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread0", - SPLIT_NS(spread0)); + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "avg_vruntime", + SPLIT_NS(avg_vruntime(cfs_rq))); + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "right_vruntime", + SPLIT_NS(right_vruntime)); + spread = right_vruntime - left_vruntime; + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread)); SEQ_printf(m, " .%-30s: %d\n", "nr_spread_over", cfs_rq->nr_spread_over); SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running); --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -601,9 +601,134 @@ static inline bool entity_before(const s return (s64)(a->vruntime - b->vruntime) < 0; } +static inline s64 entity_key(struct cfs_rq 
*cfs_rq, struct sched_entity *se) +{ + return (s64)(se->vruntime - cfs_rq->min_vruntime); +} + #define __node_2_se(node) \ rb_entry((node), struct sched_entity, run_node) +/* + * Compute virtual time from the per-task service numbers: + * + * Fair schedulers conserve lag: + * + * \Sum lag_i = 0 + * + * Where lag_i is given by: + * + * lag_i = S - s_i = w_i * (V - v_i) + * + * Where S is the ideal service time and V is its virtual time counterpart. + * Therefore: + * + * \Sum lag_i = 0 + * \Sum w_i * (V - v_i) = 0 + * \Sum w_i * V - w_i * v_i = 0 + * + * From which we can solve an expression for V in v_i (which we have in + * se->vruntime): + * + * \Sum v_i * w_i \Sum v_i * w_i + * V = -------------- = -------------- + * \Sum w_i W + * + * Specifically, this is the weighted average of all entity virtual runtimes. + * + * [[ NOTE: this is only equal to the ideal scheduler under the condition + * that join/leave operations happen at lag_i = 0, otherwise the + * virtual time has non-contiguous motion equivalent to: + * + * V +-= lag_i / W + * + * Also see the comment in place_entity() that deals with this. ]] + * + * However, since v_i is u64, and the multiplication could easily overflow, + * transform it into a relative form that uses smaller quantities: + * + * Substitute: v_i == (v_i - v0) + v0 + * + * \Sum ((v_i - v0) + v0) * w_i \Sum (v_i - v0) * w_i + * V = ---------------------------- = --------------------- + v0 + * W W + * + * Which we track using: + * + * v0 := cfs_rq->min_vruntime + * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime + * \Sum w_i := cfs_rq->avg_load + * + * Since min_vruntime is a monotonically increasing variable that closely tracks + * the per-task service, these deltas: (v_i - v), will be in the order of the + * maximal (virtual) lag induced in the system due to quantisation. + * + * Also, we use scale_load_down() to reduce the size. + * + * As measured, the max (key * weight) value was ~44 bits for a kernel build.
+ */ +static void +avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + unsigned long weight = scale_load_down(se->load.weight); + s64 key = entity_key(cfs_rq, se); + + cfs_rq->avg_vruntime += key * weight; + cfs_rq->avg_load += weight; +} + +static void +avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + unsigned long weight = scale_load_down(se->load.weight); + s64 key = entity_key(cfs_rq, se); + + cfs_rq->avg_vruntime -= key * weight; + cfs_rq->avg_load -= weight; +} + +static inline +void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta) +{ + /* + * v' = v + d ==> avg_vruntime' = avg_runtime - d*avg_load + */ + cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta; +} + +u64 avg_vruntime(struct cfs_rq *cfs_rq) +{ + struct sched_entity *curr = cfs_rq->curr; + s64 avg = cfs_rq->avg_vruntime; + long load = cfs_rq->avg_load; + + if (curr && curr->on_rq) { + unsigned long weight = scale_load_down(curr->load.weight); + + avg += entity_key(cfs_rq, curr) * weight; + load += weight; + } + + if (load) + avg = div_s64(avg, load); + + return cfs_rq->min_vruntime + avg; +} + +static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime) +{ + u64 min_vruntime = cfs_rq->min_vruntime; + /* + * open coded max_vruntime() to allow updating avg_vruntime + */ + s64 delta = (s64)(vruntime - min_vruntime); + if (delta > 0) { + avg_vruntime_update(cfs_rq, delta); + min_vruntime = vruntime; + } + return min_vruntime; +} + static void update_min_vruntime(struct cfs_rq *cfs_rq) { struct sched_entity *curr = cfs_rq->curr; @@ -629,7 +754,7 @@ static void update_min_vruntime(struct c /* ensure we never gain time by being placed backwards. */ u64_u32_store(cfs_rq->min_vruntime, - max_vruntime(cfs_rq->min_vruntime, vruntime)); + __update_min_vruntime(cfs_rq, vruntime)); } static inline bool __entity_less(struct rb_node *a, const struct rb_node *b) @@ -642,12 +767,14 @@ static inline bool __entity_less(struct */ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) { + avg_vruntime_add(cfs_rq, se); rb_add_cached(&se->run_node, &cfs_rq->tasks_timeline, __entity_less); } static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) { rb_erase_cached(&se->run_node, &cfs_rq->tasks_timeline); + avg_vruntime_sub(cfs_rq, se); } struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq) @@ -3379,6 +3506,8 @@ static void reweight_entity(struct cfs_r /* commit outstanding execution time */ if (cfs_rq->curr == se) update_curr(cfs_rq); + else + avg_vruntime_sub(cfs_rq, se); update_load_sub(&cfs_rq->load, se->load.weight); } dequeue_load_avg(cfs_rq, se); @@ -3394,9 +3523,11 @@ static void reweight_entity(struct cfs_r #endif enqueue_load_avg(cfs_rq, se); - if (se->on_rq) + if (se->on_rq) { update_load_add(&cfs_rq->load, se->load.weight); - + if (cfs_rq->curr != se) + avg_vruntime_add(cfs_rq, se); + } } void reweight_task(struct task_struct *p, int prio) --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -554,6 +554,9 @@ struct cfs_rq { unsigned int idle_nr_running; /* SCHED_IDLE */ unsigned int idle_h_nr_running; /* SCHED_IDLE */ + s64 avg_vruntime; + u64 avg_load; + u64 exec_clock; u64 min_vruntime; #ifdef CONFIG_SCHED_CORE @@ -3503,4 +3506,6 @@ static inline void task_tick_mm_cid(stru static inline void init_sched_mm_cid(struct task_struct *t) { } #endif +extern u64 avg_vruntime(struct cfs_rq *cfs_rq); + #endif /* _KERNEL_SCHED_SCHED_H */ From patchwork Wed May 31 11:58:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101421
Message-ID: <20230531124603.722361178@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:41 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 02/15] sched/fair: Remove START_DEBIT
References: <20230531115839.089944915@infradead.org>
X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DIET_1,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414246170757079?= X-GMAIL-MSGID: =?utf-8?q?1767414246170757079?= With the introduction of avg_vruntime() there is no need to use worse approximations. Take the 0-lag point as starting point for inserting new tasks. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/fair.c | 21 +-------------------- kernel/sched/features.h | 6 ------ 2 files changed, 1 insertion(+), 26 deletions(-) --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -906,16 +906,6 @@ static u64 sched_slice(struct cfs_rq *cf return slice; } -/* - * We calculate the vruntime slice of a to-be-inserted task. - * - * vs = s/w - */ -static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se) -{ - return calc_delta_fair(sched_slice(cfs_rq, se), se); -} - #include "pelt.h" #ifdef CONFIG_SMP @@ -4862,16 +4852,7 @@ static inline bool entity_is_long_sleepe static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) { - u64 vruntime = cfs_rq->min_vruntime; - - /* - * The 'current' period is already promised to the current tasks, - * however the extra weight of the new task will slow them down a - * little, place the new task so that it fits in the slot that - * stays open at the end. - */ - if (initial && sched_feat(START_DEBIT)) - vruntime += sched_vslice(cfs_rq, se); + u64 vruntime = avg_vruntime(cfs_rq); /* sleeps up to a single latency don't count. */ if (!initial) { --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -7,12 +7,6 @@ SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true) /* - * Place new tasks ahead so that they do not starve already running - * tasks - */ -SCHED_FEAT(START_DEBIT, true) - -/* * Prefer to schedule the task we woke last (assuming it failed * wakeup-preemption), since its likely going to consume data we * touched, increases cache locality. 
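As a quick numerical illustration of the two patches above: patch 1 tracks avg_vruntime and avg_load relative to min_vruntime, and patch 2 places newly enqueued tasks at the resulting 0-lag point V = min_vruntime + avg_vruntime / avg_load instead of debiting them a slice. Below is a minimal userspace sketch of that computation with made-up vruntimes and weights (plain C, not kernel code; names like toy_entity are hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Toy model of the bookkeeping from patch 1:
     *   v0           := min_vruntime
     *   avg_vruntime := \Sum (v_i - v0) * w_i
     *   avg_load     := \Sum w_i
     * so the 0-lag point is V = v0 + avg_vruntime / avg_load.
     */
    struct toy_entity {
        uint64_t vruntime;
        long weight;
    };

    static uint64_t toy_avg_vruntime(const struct toy_entity *se, int nr,
                                     uint64_t min_vruntime)
    {
        int64_t avg = 0;
        long load = 0;
        int i;

        for (i = 0; i < nr; i++) {
            avg  += (int64_t)(se[i].vruntime - min_vruntime) * se[i].weight;
            load += se[i].weight;
        }

        return min_vruntime + (load ? avg / load : 0);
    }

    int main(void)
    {
        /* assumed values; weights as after scale_load_down() */
        struct toy_entity rq[] = {
            { .vruntime = 1000, .weight = 1024 },
            { .vruntime = 3000, .weight = 1024 },
            { .vruntime =  500, .weight = 2048 },
        };

        printf("0-lag point V = %llu\n",
               (unsigned long long)toy_avg_vruntime(rq, 3, 500));
        return 0;
    }

With these numbers V comes out to 1250, in between the queued vruntimes; that is where a newly placed task now starts, rather than at min_vruntime plus a vslice debit.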
From patchwork Wed May 31 11:58:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101420
Message-ID: <20230531124603.794929315@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:42 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 03/15] sched/fair: Add lag based placement
References: <20230531115839.089944915@infradead.org>
X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414203780505848?= X-GMAIL-MSGID: =?utf-8?q?1767414203780505848?= With the introduction of avg_vruntime, it is possible to approximate lag (the entire purpose of introducing it in fact). Use this to do lag based placement over sleep+wake. Specifically, the FAIR_SLEEPERS thing places things too far to the left and messes up the deadline aspect of EEVDF. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/sched.h | 3 kernel/sched/core.c | 1 kernel/sched/fair.c | 162 +++++++++++++++++++++++++++++++++++++----------- kernel/sched/features.h | 8 ++ 4 files changed, 138 insertions(+), 36 deletions(-) --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -555,8 +555,9 @@ struct sched_entity { u64 exec_start; u64 sum_exec_runtime; - u64 vruntime; u64 prev_sum_exec_runtime; + u64 vruntime; + s64 vlag; u64 nr_migrations; --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4463,6 +4463,7 @@ static void __sched_fork(unsigned long c p->se.prev_sum_exec_runtime = 0; p->se.nr_migrations = 0; p->se.vruntime = 0; + p->se.vlag = 0; INIT_LIST_HEAD(&p->se.group_node); #ifdef CONFIG_FAIR_GROUP_SCHED --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -715,6 +715,15 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq) return cfs_rq->min_vruntime + avg; } +/* + * lag_i = S - s_i = w_i * (V - v_i) + */ +void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + SCHED_WARN_ON(!se->on_rq); + se->vlag = avg_vruntime(cfs_rq) - se->vruntime; +} + static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime) { u64 min_vruntime = cfs_rq->min_vruntime; @@ -3492,6 +3501,8 @@ dequeue_load_avg(struct cfs_rq *cfs_rq, static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, unsigned long weight) { + unsigned long old_weight = se->load.weight; + if (se->on_rq) { /* commit outstanding execution time */ if (cfs_rq->curr == se) @@ -3504,6 +3515,14 @@ static void reweight_entity(struct cfs_r update_load_set(&se->load, weight); + if (!se->on_rq) { + /* + * Because we keep se->vlag = V - v_i, while: lag_i = w_i*(V - v_i), + * we need to scale se->vlag when w_i changes. + */ + se->vlag = div_s64(se->vlag * old_weight, weight); + } + #ifdef CONFIG_SMP do { u32 divider = get_pelt_divider(&se->avg); @@ -4853,49 +4872,119 @@ static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) { u64 vruntime = avg_vruntime(cfs_rq); + s64 lag = 0; - /* sleeps up to a single latency don't count. */ - if (!initial) { - unsigned long thresh; + /* + * Due to how V is constructed as the weighted average of entities, + * adding tasks with positive lag, or removing tasks with negative lag + * will move 'time' backwards, this can screw around with the lag of + * other tasks. 
+ * + * EEVDF: placement strategy #1 / #2 + */ + if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) { + struct sched_entity *curr = cfs_rq->curr; + unsigned long load; - if (se_is_idle(se)) - thresh = sysctl_sched_min_granularity; - else - thresh = sysctl_sched_latency; + lag = se->vlag; /* - * Halve their sleep time's effect, to allow - * for a gentler effect of sleepers: + * If we want to place a task and preserve lag, we have to + * consider the effect of the new entity on the weighted + * average and compensate for this, otherwise lag can quickly + * evaporate. + * + * Lag is defined as: + * + * lag_i = S - s_i = w_i * (V - v_i) + * + * To avoid the 'w_i' term all over the place, we only track + * the virtual lag: + * + * vl_i = V - v_i <=> v_i = V - vl_i + * + * And we take V to be the weighted average of all v: + * + * V = (\Sum w_j*v_j) / W + * + * Where W is: \Sum w_j + * + * Then, the weighted average after adding an entity with lag + * vl_i is given by: + * + * V' = (\Sum w_j*v_j + w_i*v_i) / (W + w_i) + * = (W*V + w_i*(V - vl_i)) / (W + w_i) + * = (W*V + w_i*V - w_i*vl_i) / (W + w_i) + * = (V*(W + w_i) - w_i*l) / (W + w_i) + * = V - w_i*vl_i / (W + w_i) + * + * And the actual lag after adding an entity with vl_i is: + * + * vl'_i = V' - v_i + * = V - w_i*vl_i / (W + w_i) - (V - vl_i) + * = vl_i - w_i*vl_i / (W + w_i) + * + * Which is strictly less than vl_i. So in order to preserve lag + * we should inflate the lag before placement such that the + * effective lag after placement comes out right. + * + * As such, invert the above relation for vl'_i to get the vl_i + * we need to use such that the lag after placement is the lag + * we computed before dequeue. + * + * vl'_i = vl_i - w_i*vl_i / (W + w_i) + * = ((W + w_i)*vl_i - w_i*vl_i) / (W + w_i) + * + * (W + w_i)*vl'_i = (W + w_i)*vl_i - w_i*vl_i + * = W*vl_i + * + * vl_i = (W + w_i)*vl'_i / W */ - if (sched_feat(GENTLE_FAIR_SLEEPERS)) - thresh >>= 1; + load = cfs_rq->avg_load; + if (curr && curr->on_rq) + load += curr->load.weight; - vruntime -= thresh; + lag *= load + se->load.weight; + if (WARN_ON_ONCE(!load)) + load = 1; + lag = div_s64(lag, load); + + vruntime -= lag; } - /* - * Pull vruntime of the entity being placed to the base level of - * cfs_rq, to prevent boosting it if placed backwards. - * However, min_vruntime can advance much faster than real time, with - * the extreme being when an entity with the minimal weight always runs - * on the cfs_rq. If the waking entity slept for a long time, its - * vruntime difference from min_vruntime may overflow s64 and their - * comparison may get inversed, so ignore the entity's original - * vruntime in that case. - * The maximal vruntime speedup is given by the ratio of normal to - * minimal weight: scale_load_down(NICE_0_LOAD) / MIN_SHARES. - * When placing a migrated waking entity, its exec_start has been set - * from a different rq. In order to take into account a possible - * divergence between new and prev rq's clocks task because of irq and - * stolen time, we take an additional margin. - * So, cutting off on the sleep time of - * 2^63 / scale_load_down(NICE_0_LOAD) ~ 104 days - * should be safe. - */ - if (entity_is_long_sleeper(se)) - se->vruntime = vruntime; - else - se->vruntime = max_vruntime(se->vruntime, vruntime); + if (sched_feat(FAIR_SLEEPERS)) { + + /* sleeps up to a single latency don't count. 
*/ + if (!initial) { + unsigned long thresh; + + if (se_is_idle(se)) + thresh = sysctl_sched_min_granularity; + else + thresh = sysctl_sched_latency; + + /* + * Halve their sleep time's effect, to allow + * for a gentler effect of sleepers: + */ + if (sched_feat(GENTLE_FAIR_SLEEPERS)) + thresh >>= 1; + + vruntime -= thresh; + } + + /* + * Pull vruntime of the entity being placed to the base level of + * cfs_rq, to prevent boosting it if placed backwards. If the entity + * slept for a long time, don't even try to compare its vruntime with + * the base as it may be too far off and the comparison may get + * inversed due to s64 overflow. + */ + if (!entity_is_long_sleeper(se)) + vruntime = max_vruntime(se->vruntime, vruntime); + } + + se->vruntime = vruntime; } static void check_enqueue_throttle(struct cfs_rq *cfs_rq); @@ -5066,6 +5155,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, st clear_buddies(cfs_rq, se); + if (flags & DEQUEUE_SLEEP) + update_entity_lag(cfs_rq, se); + if (se != cfs_rq->curr) __dequeue_entity(cfs_rq, se); se->on_rq = 0; --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -1,12 +1,20 @@ /* SPDX-License-Identifier: GPL-2.0 */ + /* * Only give sleepers 50% of their service deficit. This allows * them to run sooner, but does not allow tons of sleepers to * rip the spread apart. */ +SCHED_FEAT(FAIR_SLEEPERS, false) SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true) /* + * Using the avg_vruntime, do the right thing and preserve lag across + * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled. + */ +SCHED_FEAT(PLACE_LAG, true) + +/* * Prefer to schedule the task we woke last (assuming it failed * wakeup-preemption), since its likely going to consume data we * touched, increases cache locality.
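The compensation PLACE_LAG applies in place_entity() above can be sanity-checked numerically: since placing an entity at V - vl_i drags the average towards it, the stored lag is first scaled by (W + w_i)/W, and the lag observed once the average has moved then matches the lag recorded at dequeue time. A small standalone sketch with assumed numbers (plain C, not kernel code):

    #include <stdio.h>

    /*
     * Identity from the comment above:
     *   vl'_i = vl_i - w_i*vl_i / (W + w_i)
     * so pre-scaling by (W + w_i)/W makes vl'_i equal the requested lag.
     */
    int main(void)
    {
        long W = 3072;    /* assumed: load already contributing to V */
        long w = 1024;    /* assumed: weight of the entity being placed */
        long vlag = 200;  /* lag recorded by update_entity_lag() at dequeue */

        long scaled = vlag * (W + w) / W;            /* what gets subtracted from V */
        long after  = scaled - w * scaled / (W + w); /* lag left once V has shifted */

        printf("lag wanted %ld, lag after placement %ld\n", vlag, after);
        return 0;
    }

Both values print as 200 here; without the scaling the placed entity would only retain W/(W + w) of its lag, and it would evaporate over repeated sleep+wake cycles.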
From patchwork Wed May 31 11:58:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101428
Message-ID: <20230531124603.862983648@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:43 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 04/15] rbtree: Add rb_add_augmented_cached() helper
References:
<20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414543574262451?= X-GMAIL-MSGID: =?utf-8?q?1767414543574262451?= While slightly sub-optimal, updating the augmented data while going down the tree during lookup would be faster -- alas the augment interface does not currently allow for that, provide a generic helper to add a node to an augmented cached tree. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/rbtree_augmented.h | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) --- a/include/linux/rbtree_augmented.h +++ b/include/linux/rbtree_augmented.h @@ -60,6 +60,32 @@ rb_insert_augmented_cached(struct rb_nod rb_insert_augmented(node, &root->rb_root, augment); } +static __always_inline struct rb_node * +rb_add_augmented_cached(struct rb_node *node, struct rb_root_cached *tree, + bool (*less)(struct rb_node *, const struct rb_node *), + const struct rb_augment_callbacks *augment) +{ + struct rb_node **link = &tree->rb_root.rb_node; + struct rb_node *parent = NULL; + bool leftmost = true; + + while (*link) { + parent = *link; + if (less(node, parent)) { + link = &parent->rb_left; + } else { + link = &parent->rb_right; + leftmost = false; + } + } + + rb_link_node(node, parent, link); + augment->propagate(parent, NULL); /* suboptimal */ + rb_insert_augmented_cached(node, tree, leftmost, augment); + + return leftmost ? 
node : NULL; +} + /* * Template for declaring augmented rbtree callbacks (generic case) *
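For context on how the new helper gets used: patch 5 below keeps the cfs_rq timeline ordered on vruntime while the augment maintains a per-subtree minimum of the virtual deadlines, which is what makes the EEVDF pick O(log n). Condensed from the hunks later in this series (not a standalone example):

    RB_DECLARE_CALLBACKS(static, min_deadline_cb, struct sched_entity,
                         run_node, min_deadline, min_deadline_update);

    static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
        avg_vruntime_add(cfs_rq, se);
        se->min_deadline = se->deadline;
        rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
                                __entity_less, &min_deadline_cb);
    }

    static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
    {
        rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
                                  &min_deadline_cb);
        avg_vruntime_sub(cfs_rq, se);
    }

As the hunk above shows, the helper itself returns the node only when it was inserted as the new leftmost element, and it calls augment->propagate() before rb_insert_augmented_cached() rebalances the tree.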
From patchwork Wed May 31 11:58:44 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101431
Message-ID: <20230531124603.931005524@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:44 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 05/15] sched/fair: Implement an EEVDF like policy
References: <20230531115839.089944915@infradead.org>
X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414619072619277?= X-GMAIL-MSGID: =?utf-8?q?1767414619072619277?= Where CFS is currently a WFQ based scheduler with only a single knob, the weight. The addition of a second, latency oriented parameter, makes something like WF2Q or EEVDF based a much better fit. Specifically, EEVDF does EDF like scheduling in the left half of the tree -- those entities that are owed service. Except because this is a virtual time scheduler, the deadlines are in virtual time as well, which is what allows over-subscription. EEVDF has two parameters: - weight, or time-slope; which is mapped to nice just as before - request size, or slice length; which is used to compute the virtual deadline as: vd_i = ve_i + r_i/w_i Basically, by setting a smaller slice, the deadline will be earlier and the task will be more eligible and ran earlier. Tick driven preemption is driven by request/slice completion; while wakeup preemption is driven by the deadline. Because the tree is now effectively an interval tree, and the selection is no longer 'leftmost', over-scheduling is less of a problem. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/sched.h | 4 kernel/sched/core.c | 1 kernel/sched/debug.c | 6 kernel/sched/fair.c | 338 +++++++++++++++++++++++++++++++++++++++++------- kernel/sched/features.h | 3 kernel/sched/sched.h | 4 6 files changed, 308 insertions(+), 48 deletions(-) --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -550,6 +550,9 @@ struct sched_entity { /* For load-balancing: */ struct load_weight load; struct rb_node run_node; + u64 deadline; + u64 min_deadline; + struct list_head group_node; unsigned int on_rq; @@ -558,6 +561,7 @@ struct sched_entity { u64 prev_sum_exec_runtime; u64 vruntime; s64 vlag; + u64 slice; u64 nr_migrations; --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4464,6 +4464,7 @@ static void __sched_fork(unsigned long c p->se.nr_migrations = 0; p->se.vruntime = 0; p->se.vlag = 0; + p->se.slice = sysctl_sched_min_granularity; INIT_LIST_HEAD(&p->se.group_node); #ifdef CONFIG_FAIR_GROUP_SCHED --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -581,9 +581,13 @@ print_task(struct seq_file *m, struct rq else SEQ_printf(m, " %c", task_state_to_char(p)); - SEQ_printf(m, " %15s %5d %9Ld.%06ld %9Ld %5d ", + SEQ_printf(m, "%15s %5d %9Ld.%06ld %c %9Ld.%06ld %9Ld.%06ld %9Ld.%06ld %9Ld %5d ", p->comm, task_pid_nr(p), SPLIT_NS(p->se.vruntime), + entity_eligible(cfs_rq_of(&p->se), &p->se) ? 
'E' : 'N', + SPLIT_NS(p->se.deadline), + SPLIT_NS(p->se.slice), + SPLIT_NS(p->se.sum_exec_runtime), (long long)(p->nvcsw + p->nivcsw), p->prio); --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -47,6 +47,7 @@ #include #include #include +#include #include @@ -347,6 +348,16 @@ static u64 __calc_delta(u64 delta_exec, return mul_u64_u32_shr(delta_exec, fact, shift); } +/* + * delta /= w + */ +static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se) +{ + if (unlikely(se->load.weight != NICE_0_LOAD)) + delta = __calc_delta(delta, NICE_0_LOAD, &se->load); + + return delta; +} const struct sched_class fair_sched_class; @@ -717,11 +728,62 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq) /* * lag_i = S - s_i = w_i * (V - v_i) + * + * However, since V is approximated by the weighted average of all entities it + * is possible -- by addition/removal/reweight to the tree -- to move V around + * and end up with a larger lag than we started with. + * + * Limit this to either double the slice length with a minimum of TICK_NSEC + * since that is the timing granularity. + * + * EEVDF gives the following limit for a steady state system: + * + * -r_max < lag < max(r_max, q) + * + * XXX could add max_slice to the augmented data to track this. */ void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se) { + s64 lag, limit; + SCHED_WARN_ON(!se->on_rq); - se->vlag = avg_vruntime(cfs_rq) - se->vruntime; + lag = avg_vruntime(cfs_rq) - se->vruntime; + + limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se); + se->vlag = clamp(lag, -limit, limit); +} + +/* + * Entity is eligible once it received less service than it ought to have, + * eg. lag >= 0. + * + * lag_i = S - s_i = w_i*(V - v_i) + * + * lag_i >= 0 -> V >= v_i + * + * \Sum (v_i - v)*w_i + * V = ------------------ + v + * \Sum w_i + * + * lag_i >= 0 -> \Sum (v_i - v)*w_i >= (v_i - v)*(\Sum w_i) + * + * Note: using 'avg_vruntime() > se->vruntime' is inacurate due + * to the loss in precision caused by the division. 
+ */ +int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + struct sched_entity *curr = cfs_rq->curr; + s64 avg = cfs_rq->avg_vruntime; + long load = cfs_rq->avg_load; + + if (curr && curr->on_rq) { + unsigned long weight = scale_load_down(curr->load.weight); + + avg += entity_key(cfs_rq, curr) * weight; + load += weight; + } + + return avg >= entity_key(cfs_rq, se) * load; } static u64 __update_min_vruntime(struct cfs_rq *cfs_rq, u64 vruntime) @@ -740,8 +802,8 @@ static u64 __update_min_vruntime(struct static void update_min_vruntime(struct cfs_rq *cfs_rq) { + struct sched_entity *se = __pick_first_entity(cfs_rq); struct sched_entity *curr = cfs_rq->curr; - struct rb_node *leftmost = rb_first_cached(&cfs_rq->tasks_timeline); u64 vruntime = cfs_rq->min_vruntime; @@ -752,9 +814,7 @@ static void update_min_vruntime(struct c curr = NULL; } - if (leftmost) { /* non-empty tree */ - struct sched_entity *se = __node_2_se(leftmost); - + if (se) { if (!curr) vruntime = se->vruntime; else @@ -771,18 +831,50 @@ static inline bool __entity_less(struct return entity_before(__node_2_se(a), __node_2_se(b)); } +#define deadline_gt(field, lse, rse) ({ (s64)((lse)->field - (rse)->field) > 0; }) + +static inline void __update_min_deadline(struct sched_entity *se, struct rb_node *node) +{ + if (node) { + struct sched_entity *rse = __node_2_se(node); + if (deadline_gt(min_deadline, se, rse)) + se->min_deadline = rse->min_deadline; + } +} + +/* + * se->min_deadline = min(se->deadline, left->min_deadline, right->min_deadline) + */ +static inline bool min_deadline_update(struct sched_entity *se, bool exit) +{ + u64 old_min_deadline = se->min_deadline; + struct rb_node *node = &se->run_node; + + se->min_deadline = se->deadline; + __update_min_deadline(se, node->rb_right); + __update_min_deadline(se, node->rb_left); + + return se->min_deadline == old_min_deadline; +} + +RB_DECLARE_CALLBACKS(static, min_deadline_cb, struct sched_entity, + run_node, min_deadline, min_deadline_update); + /* * Enqueue an entity into the rb-tree: */ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) { avg_vruntime_add(cfs_rq, se); - rb_add_cached(&se->run_node, &cfs_rq->tasks_timeline, __entity_less); + se->min_deadline = se->deadline; + rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline, + __entity_less, &min_deadline_cb); } static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) { - rb_erase_cached(&se->run_node, &cfs_rq->tasks_timeline); + rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline, + &min_deadline_cb); avg_vruntime_sub(cfs_rq, se); } @@ -806,6 +898,97 @@ static struct sched_entity *__pick_next_ return __node_2_se(next); } +static struct sched_entity *pick_cfs(struct cfs_rq *cfs_rq, struct sched_entity *curr) +{ + struct sched_entity *left = __pick_first_entity(cfs_rq); + + /* + * If curr is set we have to see if its left of the leftmost entity + * still in the tree, provided there was anything in the tree at all. + */ + if (!left || (curr && entity_before(curr, left))) + left = curr; + + return left; +} + +/* + * Earliest Eligible Virtual Deadline First + * + * In order to provide latency guarantees for different request sizes + * EEVDF selects the best runnable task from two criteria: + * + * 1) the task must be eligible (must be owed service) + * + * 2) from those tasks that meet 1), we select the one + * with the earliest virtual deadline. + * + * We can do this in O(log n) time due to an augmented RB-tree. 
The + * tree keeps the entries sorted on service, but also functions as a + * heap based on the deadline by keeping: + * + * se->min_deadline = min(se->deadline, se->{left,right}->min_deadline) + * + * Which allows an EDF like search on (sub)trees. + */ +static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq) +{ + struct rb_node *node = cfs_rq->tasks_timeline.rb_root.rb_node; + struct sched_entity *curr = cfs_rq->curr; + struct sched_entity *best = NULL; + + if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr))) + curr = NULL; + + while (node) { + struct sched_entity *se = __node_2_se(node); + + /* + * If this entity is not eligible, try the left subtree. + */ + if (!entity_eligible(cfs_rq, se)) { + node = node->rb_left; + continue; + } + + /* + * If this entity has an earlier deadline than the previous + * best, take this one. If it also has the earliest deadline + * of its subtree, we're done. + */ + if (!best || deadline_gt(deadline, best, se)) { + best = se; + if (best->deadline == best->min_deadline) + break; + } + + /* + * If the earlest deadline in this subtree is in the fully + * eligible left half of our space, go there. + */ + if (node->rb_left && + __node_2_se(node->rb_left)->min_deadline == se->min_deadline) { + node = node->rb_left; + continue; + } + + node = node->rb_right; + } + + if (!best || (curr && deadline_gt(deadline, best, curr))) + best = curr; + + if (unlikely(!best)) { + struct sched_entity *left = __pick_first_entity(cfs_rq); + if (left) { + pr_err("EEVDF scheduling fail, picking leftmost\n"); + return left; + } + } + + return best; +} + #ifdef CONFIG_SCHED_DEBUG struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq) { @@ -840,17 +1023,6 @@ int sched_update_scaling(void) #endif /* - * delta /= w - */ -static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se) -{ - if (unlikely(se->load.weight != NICE_0_LOAD)) - delta = __calc_delta(delta, NICE_0_LOAD, &se->load); - - return delta; -} - -/* * The idea is to set a period in which each task runs once. * * When there are too many tasks (sched_nr_latency) we have to stretch @@ -915,6 +1087,48 @@ static u64 sched_slice(struct cfs_rq *cf return slice; } +static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se); + +/* + * XXX: strictly: vd_i += N*r_i/w_i such that: vd_i > ve_i + * this is probably good enough. + */ +static void update_deadline(struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + if ((s64)(se->vruntime - se->deadline) < 0) + return; + + if (sched_feat(EEVDF)) { + /* + * For EEVDF the virtual time slope is determined by w_i (iow. + * nice) while the request time r_i is determined by + * sysctl_sched_min_granularity. + */ + se->slice = sysctl_sched_min_granularity; + + /* + * The task has consumed its request, reschedule. + */ + if (cfs_rq->nr_running > 1) { + resched_curr(rq_of(cfs_rq)); + clear_buddies(cfs_rq, se); + } + } else { + /* + * When many tasks blow up the sched_period; it is possible + * that sched_slice() reports unusually large results (when + * many tasks are very light for example). Therefore impose a + * maximum. 
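A worked pass through pick_eevdf() above may help (numbers invented, equal weights, no current task). Three queued entities, tree keyed on vruntime:

    A: v=0, d=12      B: v=3, d=7      C: v=9, d=5
    V = (0 + 3 + 9)/3 = 4, so A and B are eligible and C is not.
    Tree shape: B at the root, A to its left, C to its right.
    min_deadline: A=12, C=5, B=min(7, 12, 5)=5.

The walk visits B first: it is eligible and becomes best (d=7), but 7 differs from B's min_deadline (5), and the left child's min_deadline (12) also differs from 5, so the earlier deadline must lie to the right; C is not eligible, so the walk steps to its empty left subtree and terminates. The result is B, the earliest deadline among the eligible entities; C holds the earliest deadline overall but is ahead of its entitled service (v=9 > V=4) and is correctly skipped.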
+ */ + se->slice = min_t(u64, sched_slice(cfs_rq, se), sysctl_sched_latency); + } + + /* + * EEVDF: vd_i = ve_i + r_i / w_i + */ + se->deadline = se->vruntime + calc_delta_fair(se->slice, se); +} + #include "pelt.h" #ifdef CONFIG_SMP @@ -1047,6 +1261,7 @@ static void update_curr(struct cfs_rq *c schedstat_add(cfs_rq->exec_clock, delta_exec); curr->vruntime += calc_delta_fair(delta_exec, curr); + update_deadline(cfs_rq, curr); update_min_vruntime(cfs_rq); if (entity_is_task(curr)) { @@ -3521,6 +3736,14 @@ static void reweight_entity(struct cfs_r * we need to scale se->vlag when w_i changes. */ se->vlag = div_s64(se->vlag * old_weight, weight); + } else { + s64 deadline = se->deadline - se->vruntime; + /* + * When the weight changes, the virtual time slope changes and + * we should adjust the relative virtual deadline accordingly. + */ + deadline = div_s64(deadline * old_weight, weight); + se->deadline = se->vruntime + deadline; } #ifdef CONFIG_SMP @@ -4871,6 +5094,7 @@ static inline bool entity_is_long_sleepe static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) { + u64 vslice = calc_delta_fair(se->slice, se); u64 vruntime = avg_vruntime(cfs_rq); s64 lag = 0; @@ -4942,9 +5166,9 @@ place_entity(struct cfs_rq *cfs_rq, stru */ load = cfs_rq->avg_load; if (curr && curr->on_rq) - load += curr->load.weight; + load += scale_load_down(curr->load.weight); - lag *= load + se->load.weight; + lag *= load + scale_load_down(se->load.weight); if (WARN_ON_ONCE(!load)) load = 1; lag = div_s64(lag, load); @@ -4985,6 +5209,19 @@ place_entity(struct cfs_rq *cfs_rq, stru } se->vruntime = vruntime; + + /* + * When joining the competition; the exisiting tasks will be, + * on average, halfway through their slice, as such start tasks + * off with half a slice to ease into the competition. + */ + if (sched_feat(PLACE_DEADLINE_INITIAL) && initial) + vslice /= 2; + + /* + * EEVDF: vd_i = ve_i + r_i/w_i + */ + se->deadline = se->vruntime + vslice; } static void check_enqueue_throttle(struct cfs_rq *cfs_rq); @@ -5196,19 +5433,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, st static void check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr) { - unsigned long ideal_runtime, delta_exec; + unsigned long delta_exec; struct sched_entity *se; s64 delta; - /* - * When many tasks blow up the sched_period; it is possible that - * sched_slice() reports unusually large results (when many tasks are - * very light for example). Therefore impose a maximum. - */ - ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency); - delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime; - if (delta_exec > ideal_runtime) { + if (delta_exec > curr->slice) { resched_curr(rq_of(cfs_rq)); /* * The current task ran long enough, ensure it doesn't get @@ -5232,7 +5462,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq if (delta < 0) return; - if (delta > ideal_runtime) + if (delta > curr->slice) resched_curr(rq_of(cfs_rq)); } @@ -5287,17 +5517,20 @@ wakeup_preempt_entity(struct sched_entit static struct sched_entity * pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr) { - struct sched_entity *left = __pick_first_entity(cfs_rq); - struct sched_entity *se; + struct sched_entity *left, *se; - /* - * If curr is set we have to see if its left of the leftmost entity - * still in the tree, provided there was anything in the tree at all. 
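The two deadline manipulations in this area (vd_i = ve_i + r_i/w_i when the old deadline is consumed, and the rescale in reweight_entity()) come down to simple arithmetic. A stand-alone sketch with invented numbers, using a simplified stand-in for calc_delta_fair() (the kernel's __calc_delta() performs the same scaling in fixed point):

    #include <stdio.h>

    #define NICE_0_LOAD 1024ULL

    /* Simplified: virtual delta = wall delta * NICE_0_LOAD / weight. */
    static unsigned long long calc_delta_fair(unsigned long long delta,
                                              unsigned long long weight)
    {
            return delta * NICE_0_LOAD / weight;
    }

    int main(void)
    {
            unsigned long long slice = 3000000ULL;  /* 3ms request r_i */
            long long rel = 2000000;                /* vd_i - ve_i before a reweight */

            /*
             * Heavier entities get a nearer virtual deadline for the same
             * wall-clock request, so they are picked more often.
             */
            printf("nice 0 vslice : %llu ns\n", calc_delta_fair(slice, 1024));
            printf("2x load vslice: %llu ns\n", calc_delta_fair(slice, 2048));

            /*
             * reweight_entity(): scale the remaining relative deadline by
             * old_weight / new_weight so the wall-clock request is preserved.
             */
            printf("rescaled      : %lld ns\n", rel * 1024 / 2048);
            return 0;
    }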
- */ - if (!left || (curr && entity_before(curr, left))) - left = curr; + if (sched_feat(EEVDF)) { + /* + * Enabling NEXT_BUDDY will affect latency but not fairness. + */ + if (sched_feat(NEXT_BUDDY) && + cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) + return cfs_rq->next; + + return pick_eevdf(cfs_rq); + } - se = left; /* ideally we run the leftmost entity */ + se = left = pick_cfs(cfs_rq, curr); /* * Avoid running the skip buddy, if running something else can @@ -5390,7 +5623,7 @@ entity_tick(struct cfs_rq *cfs_rq, struc return; #endif - if (cfs_rq->nr_running > 1) + if (!sched_feat(EEVDF) && cfs_rq->nr_running > 1) check_preempt_tick(cfs_rq, curr); } @@ -6396,13 +6629,12 @@ static inline void unthrottle_offline_cf static void hrtick_start_fair(struct rq *rq, struct task_struct *p) { struct sched_entity *se = &p->se; - struct cfs_rq *cfs_rq = cfs_rq_of(se); SCHED_WARN_ON(task_rq(p) != rq); if (rq->cfs.h_nr_running > 1) { - u64 slice = sched_slice(cfs_rq, se); u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime; + u64 slice = se->slice; s64 delta = slice - ran; if (delta < 0) { @@ -8122,7 +8354,19 @@ static void check_preempt_wakeup(struct if (cse_is_idle != pse_is_idle) return; - update_curr(cfs_rq_of(se)); + cfs_rq = cfs_rq_of(se); + update_curr(cfs_rq); + + if (sched_feat(EEVDF)) { + /* + * XXX pick_eevdf(cfs_rq) != se ? + */ + if (pick_eevdf(cfs_rq) == pse) + goto preempt; + + return; + } + if (wakeup_preempt_entity(se, pse) == 1) { /* * Bias pick_next to pick the sched entity that is @@ -8368,7 +8612,7 @@ static void yield_task_fair(struct rq *r clear_buddies(cfs_rq, se); - if (curr->policy != SCHED_BATCH) { + if (sched_feat(EEVDF) || curr->policy != SCHED_BATCH) { update_rq_clock(rq); /* * Update run-time statistics of the 'current'. @@ -8381,6 +8625,8 @@ static void yield_task_fair(struct rq *r */ rq_clock_skip_update(rq); } + if (sched_feat(EEVDF)) + se->deadline += calc_delta_fair(se->slice, se); set_skip_buddy(se); } @@ -12136,8 +12382,8 @@ static void rq_offline_fair(struct rq *r static inline bool __entity_slice_used(struct sched_entity *se, int min_nr_tasks) { - u64 slice = sched_slice(cfs_rq_of(se), se); u64 rtime = se->sum_exec_runtime - se->prev_sum_exec_runtime; + u64 slice = se->slice; return (rtime * min_nr_tasks > slice); } @@ -12832,7 +13078,7 @@ static unsigned int get_rr_interval_fair * idle runqueue: */ if (rq->cfs.load.weight) - rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se)); + rr_interval = NS_TO_JIFFIES(se->slice); return rr_interval; } --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -13,6 +13,7 @@ SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true) * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled. 
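With the EEVDF branch above, wakeup preemption reduces to asking whether the freshly placed waking task would win the pick right now. An invented example: if the woken task lands eligible with virtual deadline 5 while the running task's deadline is 8, pick_eevdf() returns the woken task and the current one is rescheduled; if it lands with deadline 9 it does not preempt, regardless of how its vruntime compares, and simply waits its turn in deadline order.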
*/ SCHED_FEAT(PLACE_LAG, true) +SCHED_FEAT(PLACE_DEADLINE_INITIAL, true) /* * Prefer to schedule the task we woke last (assuming it failed @@ -103,3 +104,5 @@ SCHED_FEAT(LATENCY_WARN, false) SCHED_FEAT(ALT_PERIOD, true) SCHED_FEAT(BASE_SLICE, true) + +SCHED_FEAT(EEVDF, true) --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2481,9 +2481,10 @@ extern void check_preempt_curr(struct rq extern const_debug unsigned int sysctl_sched_nr_migrate; extern const_debug unsigned int sysctl_sched_migration_cost; +extern unsigned int sysctl_sched_min_granularity; + #ifdef CONFIG_SCHED_DEBUG extern unsigned int sysctl_sched_latency; -extern unsigned int sysctl_sched_min_granularity; extern unsigned int sysctl_sched_idle_min_granularity; extern unsigned int sysctl_sched_wakeup_granularity; extern int sysctl_resched_latency_warn_ms; @@ -3507,5 +3508,6 @@ static inline void init_sched_mm_cid(str #endif extern u64 avg_vruntime(struct cfs_rq *cfs_rq); +extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se); #endif /* _KERNEL_SCHED_SCHED_H */ From patchwork Wed May 31 11:58:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101423 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2855623vqr; Wed, 31 May 2023 05:56:01 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5fXULsb2KOqVnY8W1ietnCu6VvP1pUzQv0csKhiYx6VSNZHou7HZI5GOzsPHQqCcggqK3o X-Received: by 2002:a05:6a20:8f1d:b0:104:b21f:26b0 with SMTP id b29-20020a056a208f1d00b00104b21f26b0mr6187324pzk.47.1685537761054; Wed, 31 May 2023 05:56:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685537761; cv=none; d=google.com; s=arc-20160816; b=WvuMSAcKZ4OyMgNJCJfulDK8ElwUXkJ5YSju6NExeBXqJeXwT01H5Ri8JWYrWFt5xj vABAbAakFLiEKk1mNOF7dn6wFJYBRflPj1WqcvOK6GEA4+YkC5SelevZ22NtVzMB1Izv UJ1Mc+wQI7bAz++8Imz5+3t0OXVYIhfM4TqbKoh9KDRKFglIhCxp5gpjtJIOu8L/JpCx 2s7EmGcJVxEt8+ueyiTsnA6NkZhDqLNI8Jr0oFY6QxnkOZKj0hYqZameLWvsfVMobJvv vU0vpqFKhQ+AFvMHDMo07NvENAwQaL+DRr5wjIkCeMH9YF/SVcO1Dgky939zx+CgbUKZ MUXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=ooCzPCYSqVr6QyMDeQ079tUQl3VStjnWctZzZdJhV4U=; b=BcpBatUu9d8eZI/pSgDixxjx0TTZSsjPEqWi1JxjW6W6QR4fBZFMPIBpjRS14z8ZiC ykr9q8nk9oJAtmdP3kn4Jj+Lq6GuEU/tApzUBjSeqHLVssXMXqW6/MsGLWUrrMBAjclj nn1+NC6XX8KSV6XxDLFz6ovsgQv2NCJ2TjqA6gvxkp/UKMUbXYOC0MjUFfR3m1WdUvrp vfO9H7deKfC+k3IQFJfn60xqLpPTBOifUA2hYuGR3fWS1jjmFmZxeen8kOwbDpquD79D LqPTw9Tm1AIWQnw2gp+upGcZ+t4d2ybZKZWv1HV6+pZKp35udfMaAOucojiXfJXrb7xk YFyw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=LJZTEnix; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id o25-20020a637319000000b0051b29733bc9si873023pgc.715.2023.05.31.05.55.46; Wed, 31 May 2023 05:56:01 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=LJZTEnix; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236037AbjEaMtO (ORCPT + 99 others); Wed, 31 May 2023 08:49:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235969AbjEaMs6 (ORCPT ); Wed, 31 May 2023 08:48:58 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B928D11F for ; Wed, 31 May 2023 05:48:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=ooCzPCYSqVr6QyMDeQ079tUQl3VStjnWctZzZdJhV4U=; b=LJZTEnixaPT+VF/Dg/xCQ2sZ/+ CHTZJbJ/bdU8Shcdezf0cjmAH5E9lbIE1iClq02huaoxKsC2yIjzD05cJmCJd/7d0ihQxbClm7aWq TlYqv3Sp04YAVPY6rG9j6hGWV+zFCfjTYaXQcwaCvTABtZkaicNq0F5UoDYd10lak4zh2zdlj2bBN 9msf75dU3pSVfh6H48W1mm+l3vYAoT491Ut2eI9Sk2cQcELw7tyePjSmHjhmM6p4cMHwoduiEH7uV H4wSdYJRwQCgI9kg8O5zf9Ec3BfY651JGrvIR3f1wkjanOZRUX1gdOT2OojxkQAdJDEi/lgjs8sEN 61meYq8g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q4LEp-00FSLE-36; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 838CA300C4B; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id F098A22BA6451; Wed, 31 May 2023 14:47:33 +0200 (CEST) Message-ID: <20230531124604.000198861@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:45 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 06/15] sched: Commit to lag based placement References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, 
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414443449036834?= X-GMAIL-MSGID: =?utf-8?q?1767414443449036834?= Removes the FAIR_SLEEPERS code in favour of the new LAG based placement. Specifically, the whole FAIR_SLEEPER thing was a very crud approximation to make up for the lack of lag based placement, specifically the 'service owed' part. This is important for things like 'starve' and 'hackbench'. One side effect of FAIR_SLEEPER is that is caused 'small' unfairness, specifically, by always ignoring up-to 'thresh' sleeptime it would have a 50%/50% time distribution for a 50% sleeper vs a 100% runner, while strictly speaking this should (of course) result in a 33%/67% split (as CFS will also do if the sleep period exceeds 'thresh'). Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/fair.c | 59 ------------------------------------------------ kernel/sched/features.h | 8 ------ 2 files changed, 1 insertion(+), 66 deletions(-) --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5068,29 +5068,6 @@ static void check_spread(struct cfs_rq * #endif } -static inline bool entity_is_long_sleeper(struct sched_entity *se) -{ - struct cfs_rq *cfs_rq; - u64 sleep_time; - - if (se->exec_start == 0) - return false; - - cfs_rq = cfs_rq_of(se); - - sleep_time = rq_clock_task(rq_of(cfs_rq)); - - /* Happen while migrating because of clock task divergence */ - if (sleep_time <= se->exec_start) - return false; - - sleep_time -= se->exec_start; - if (sleep_time > ((1ULL << 63) / scale_load_down(NICE_0_LOAD))) - return true; - - return false; -} - static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) { @@ -5172,43 +5149,9 @@ place_entity(struct cfs_rq *cfs_rq, stru if (WARN_ON_ONCE(!load)) load = 1; lag = div_s64(lag, load); - - vruntime -= lag; - } - - if (sched_feat(FAIR_SLEEPERS)) { - - /* sleeps up to a single latency don't count. */ - if (!initial) { - unsigned long thresh; - - if (se_is_idle(se)) - thresh = sysctl_sched_min_granularity; - else - thresh = sysctl_sched_latency; - - /* - * Halve their sleep time's effect, to allow - * for a gentler effect of sleepers: - */ - if (sched_feat(GENTLE_FAIR_SLEEPERS)) - thresh >>= 1; - - vruntime -= thresh; - } - - /* - * Pull vruntime of the entity being placed to the base level of - * cfs_rq, to prevent boosting it if placed backwards. If the entity - * slept for a long time, don't even try to compare its vruntime with - * the base as it may be too far off and the comparison may get - * inversed due to s64 overflow. - */ - if (!entity_is_long_sleeper(se)) - vruntime = max_vruntime(se->vruntime, vruntime); } - se->vruntime = vruntime; + se->vruntime = vruntime - lag; /* * When joining the competition; the exisiting tasks will be, --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -1,14 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * Only give sleepers 50% of their service deficit. This allows - * them to run sooner, but does not allow tons of sleepers to - * rip the spread apart. - */ -SCHED_FEAT(FAIR_SLEEPERS, false) -SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true) - -/* * Using the avg_vruntime, do the right thing and preserve lag across * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled. 
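One way to reproduce the 33%/67% figure from the changelog above (single CPU, equal weights, numbers purely illustrative): model the '50% sleeper' as a task that sleeps one unit of wall time for every unit of service it receives. While it is runnable it shares the CPU equally with the always-runnable task, so collecting x of service keeps it runnable for 2x of wall time, after which it sleeps for x. Over one 3x cycle:

    sleeper: x / 3x           = 1/3 ~ 33%
    runner : (2x/2 + x) / 3x  = 2/3 ~ 67%

The FAIR_SLEEPERS credit being removed here gave the sleeper up to 'thresh' worth of vruntime back on every wakeup, which is what dragged the observed split back toward 50%/50%.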
*/ From patchwork Wed May 31 11:58:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101456 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2872186vqr; Wed, 31 May 2023 06:16:30 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5IOFP1jw++/jd4ylaiQRFt2Kts9CJc2ISI8SYU039R46WXkLE8/xfxqONK/oIjl2OpvNmk X-Received: by 2002:a17:90a:a383:b0:256:4eda:dfcd with SMTP id x3-20020a17090aa38300b002564edadfcdmr4508504pjp.26.1685538989784; Wed, 31 May 2023 06:16:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685538989; cv=none; d=google.com; s=arc-20160816; b=VQ0yXSydnEhlRa44KzQMYcZ7hh7PjZ0M33KKb5E1kQXcYkp1mZwS7XlRt8i20VcX+H yTolx5E7/caMrbj8zDtR2D9ifc0ELZMfBePaFnPVvSt/5d9nfH43kY6OMkZ2VyuUZR7c Xd9j+LFT5umls7X4NtGFQX2aymSRCI93nRcWrdA8ehzRDC7K/vic17ENhrty4Blu5KUL z4sQlHLS61SJ3eOmNERR9xGKJcvOTcMm9MtZ2nCnLoB62ivaMfWLiOdtg0PVIZGFJvwC 80W4/yP62C5NWR+V8ChVgthpsIV70qf3yQImpN/Ds4LELIPXaR7INC0Pk8BWpQpo2MkO RJJA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=7c6h7GZfdJ+T9RjYs8/xyMIsRGXNP+akeErf4CwdC0Y=; b=MVKBxhse0WNVGs6QkaUP4Bq0c1wuKutThgHeGj2rtiysom9BJx8q/qt7g385Zo3MjT DX2E4JKzktZnC6jR24qrAW3vGaaqWvolMBCuYEgks+XvNaKASML6p/zm5L6OpqIPQGCY E6tzApi9hzvCA1FcWYH72OvGqY1Eg5bdk2N5Tl0Wu1ZqotXC4uxkXmC/8JCbBEheUVu3 mXb01oE4vIC3vGxDBha7qTIMPoCqc4t6EbEVcpFEv6mZlpToQxdmiNekeEiqHzUqibDM GahywzssQoqrWGt04kJYkewGTpEnAmouPbINkNPiYwcbLFj3pOQaizguhHJqL0j6k8Ek 0kAg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=D7Zx31l8; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id y2-20020a17090a134200b002478bba4da2si930110pjf.127.2023.05.31.06.16.17; Wed, 31 May 2023 06:16:29 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=D7Zx31l8; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236088AbjEaMtd (ORCPT + 99 others); Wed, 31 May 2023 08:49:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233059AbjEaMs7 (ORCPT ); Wed, 31 May 2023 08:48:59 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3AEB6E47 for ; Wed, 31 May 2023 05:48:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=7c6h7GZfdJ+T9RjYs8/xyMIsRGXNP+akeErf4CwdC0Y=; b=D7Zx31l8iu9t90D4/DQ67QHEdv wNiOfwEEaHbVCg/SzU/UZ3Jc0Nz5oTIO2Yz9Dlw6ZB9KDA4BjndrYzYfFGAh59kE/Nh3SPYjL2Mwl yCSERkqENYxsFFggKCf6PUgYADYRyuWDNnhgZY1GA5Gsfd0zOz9/paiAEMt1fM7fhTtl13uVeeXF5 +9Ezv0WXysDu7TCGSq8/8EmwJRYg/Or77a5sM0Ka95UqqxH/UqCCQShGjihCYBmv+n3EA821k2jdH HeBB+3fjzpoZo0xqX9jkswX9Mys6zHn2ZBV0vXLEjHjyO6TrZTUkkNu+BN3zSw+RMkM3emFs4L5En LImozRIQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q4LEp-007GRb-Tu; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 838083003E1; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 00C0D22BA6459; Wed, 31 May 2023 14:47:33 +0200 (CEST) Message-ID: <20230531124604.068911180@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:46 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 07/15] sched/smp: Use lag to simplify cross-runqueue placement References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, 
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767415731977730162?= X-GMAIL-MSGID: =?utf-8?q?1767415731977730162?= Using lag is both more correct and simpler when moving between runqueues. Notable, min_vruntime() was invented as a cheap approximation of avg_vruntime() for this very purpose (SMP migration). Since we now have the real thing; use it. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/fair.c | 145 ++++++---------------------------------------------- 1 file changed, 19 insertions(+), 126 deletions(-) --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5083,7 +5083,7 @@ place_entity(struct cfs_rq *cfs_rq, stru * * EEVDF: placement strategy #1 / #2 */ - if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) { + if (sched_feat(PLACE_LAG) && cfs_rq->nr_running) { struct sched_entity *curr = cfs_rq->curr; unsigned long load; @@ -5171,61 +5171,21 @@ static void check_enqueue_throttle(struc static inline bool cfs_bandwidth_used(void); -/* - * MIGRATION - * - * dequeue - * update_curr() - * update_min_vruntime() - * vruntime -= min_vruntime - * - * enqueue - * update_curr() - * update_min_vruntime() - * vruntime += min_vruntime - * - * this way the vruntime transition between RQs is done when both - * min_vruntime are up-to-date. - * - * WAKEUP (remote) - * - * ->migrate_task_rq_fair() (p->state == TASK_WAKING) - * vruntime -= min_vruntime - * - * enqueue - * update_curr() - * update_min_vruntime() - * vruntime += min_vruntime - * - * this way we don't have the most up-to-date min_vruntime on the originating - * CPU and an up-to-date min_vruntime on the destination CPU. - */ - static void enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) { - bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATED); bool curr = cfs_rq->curr == se; /* * If we're the current task, we must renormalise before calling * update_curr(). */ - if (renorm && curr) - se->vruntime += cfs_rq->min_vruntime; + if (curr) + place_entity(cfs_rq, se, 0); update_curr(cfs_rq); /* - * Otherwise, renormalise after, such that we're placed at the current - * moment in time, instead of some random moment in the past. Being - * placed in the past could significantly boost this task to the - * fairness detriment of existing tasks. - */ - if (renorm && !curr) - se->vruntime += cfs_rq->min_vruntime; - - /* * When enqueuing a sched_entity, we must: * - Update loads to have both entity and cfs_rq synced with now. * - For group_entity, update its runnable_weight to reflect the new @@ -5236,11 +5196,22 @@ enqueue_entity(struct cfs_rq *cfs_rq, st */ update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH); se_update_runnable(se); + /* + * XXX update_load_avg() above will have attached us to the pelt sum; + * but update_cfs_group() here will re-adjust the weight and have to + * undo/redo all that. Seems wasteful. + */ update_cfs_group(se); - account_entity_enqueue(cfs_rq, se); - if (flags & ENQUEUE_WAKEUP) + /* + * XXX now that the entity has been re-weighted, and it's lag adjusted, + * we can place the entity. 
+ */ + if (!curr) place_entity(cfs_rq, se, 0); + + account_entity_enqueue(cfs_rq, se); + /* Entity has migrated, no longer consider this task hot */ if (flags & ENQUEUE_MIGRATED) se->exec_start = 0; @@ -5335,23 +5306,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, st clear_buddies(cfs_rq, se); - if (flags & DEQUEUE_SLEEP) - update_entity_lag(cfs_rq, se); - + update_entity_lag(cfs_rq, se); if (se != cfs_rq->curr) __dequeue_entity(cfs_rq, se); se->on_rq = 0; account_entity_dequeue(cfs_rq, se); - /* - * Normalize after update_curr(); which will also have moved - * min_vruntime if @se is the one holding it back. But before doing - * update_min_vruntime() again, which will discount @se's position and - * can move min_vruntime forward still more. - */ - if (!(flags & DEQUEUE_SLEEP)) - se->vruntime -= cfs_rq->min_vruntime; - /* return excess runtime on last dequeue */ return_cfs_rq_runtime(cfs_rq); @@ -8102,18 +8062,6 @@ static void migrate_task_rq_fair(struct { struct sched_entity *se = &p->se; - /* - * As blocked tasks retain absolute vruntime the migration needs to - * deal with this by subtracting the old and adding the new - * min_vruntime -- the latter is done by enqueue_entity() when placing - * the task on the new runqueue. - */ - if (READ_ONCE(p->__state) == TASK_WAKING) { - struct cfs_rq *cfs_rq = cfs_rq_of(se); - - se->vruntime -= u64_u32_load(cfs_rq->min_vruntime); - } - if (!task_on_rq_migrating(p)) { remove_entity_load_avg(se); @@ -12482,8 +12430,8 @@ static void task_tick_fair(struct rq *rq */ static void task_fork_fair(struct task_struct *p) { - struct cfs_rq *cfs_rq; struct sched_entity *se = &p->se, *curr; + struct cfs_rq *cfs_rq; struct rq *rq = this_rq(); struct rq_flags rf; @@ -12492,22 +12440,9 @@ static void task_fork_fair(struct task_s cfs_rq = task_cfs_rq(current); curr = cfs_rq->curr; - if (curr) { + if (curr) update_curr(cfs_rq); - se->vruntime = curr->vruntime; - } place_entity(cfs_rq, se, 1); - - if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) { - /* - * Upon rescheduling, sched_class::put_prev_task() will place - * 'current' within the tree based on its new key value. - */ - swap(curr->vruntime, se->vruntime); - resched_curr(rq); - } - - se->vruntime -= cfs_rq->min_vruntime; rq_unlock(rq, &rf); } @@ -12536,34 +12471,6 @@ prio_changed_fair(struct rq *rq, struct check_preempt_curr(rq, p, 0); } -static inline bool vruntime_normalized(struct task_struct *p) -{ - struct sched_entity *se = &p->se; - - /* - * In both the TASK_ON_RQ_QUEUED and TASK_ON_RQ_MIGRATING cases, - * the dequeue_entity(.flags=0) will already have normalized the - * vruntime. - */ - if (p->on_rq) - return true; - - /* - * When !on_rq, vruntime of the task has usually NOT been normalized. - * But there are some cases where it has already been normalized: - * - * - A forked child which is waiting for being woken up by - * wake_up_new_task(). - * - A task which has been woken up by try_to_wake_up() and - * waiting for actually being woken up by sched_ttwu_pending(). 
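For reference, a short derivation of why carrying just the virtual lag across a migration is enough, using the notation of the avg_vruntime change earlier in the series (W is the weight already queued on the destination; the clamp in update_entity_lag() and the curr special case are ignored):

    lag  = V_src - v_i                        (recorded at dequeue)
    v_i  = V_dst - lag * (W + w_i) / W        (place_entity() on the new queue)

    V_dst' = (W*V_dst + w_i*v_i) / (W + w_i)  (average after insertion)
    V_dst' - v_i = W*(V_dst - v_i) / (W + w_i)
                 = W * lag*(W + w_i)/W / (W + w_i)
                 = lag

so the task resumes with exactly the lag it went to sleep with, independent of either queue's min_vruntime, which is why the old +/- min_vruntime bookkeeping removed above is no longer needed.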
- */ - if (!se->sum_exec_runtime || - (READ_ONCE(p->__state) == TASK_WAKING && p->sched_remote_wakeup)) - return true; - - return false; -} - #ifdef CONFIG_FAIR_GROUP_SCHED /* * Propagate the changes of the sched_entity across the tg tree to make it @@ -12634,16 +12541,6 @@ static void attach_entity_cfs_rq(struct static void detach_task_cfs_rq(struct task_struct *p) { struct sched_entity *se = &p->se; - struct cfs_rq *cfs_rq = cfs_rq_of(se); - - if (!vruntime_normalized(p)) { - /* - * Fix up our vruntime so that the current sleep doesn't - * cause 'unlimited' sleep bonus. - */ - place_entity(cfs_rq, se, 0); - se->vruntime -= cfs_rq->min_vruntime; - } detach_entity_cfs_rq(se); } @@ -12651,12 +12548,8 @@ static void detach_task_cfs_rq(struct ta static void attach_task_cfs_rq(struct task_struct *p) { struct sched_entity *se = &p->se; - struct cfs_rq *cfs_rq = cfs_rq_of(se); attach_entity_cfs_rq(se); - - if (!vruntime_normalized(p)) - se->vruntime += cfs_rq->min_vruntime; } static void switched_from_fair(struct rq *rq, struct task_struct *p) From patchwork Wed May 31 11:58:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101462 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2875884vqr; Wed, 31 May 2023 06:21:42 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4pY51XWgQO61006CVhjIGuFKhvj3/Rfak0bmwLHCETEpknFM8uYgmMg3BknJQ01v1U+9Mc X-Received: by 2002:a05:6a20:4416:b0:10b:6f14:32d7 with SMTP id ce22-20020a056a20441600b0010b6f1432d7mr5437360pzb.31.1685539301948; Wed, 31 May 2023 06:21:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685539301; cv=none; d=google.com; s=arc-20160816; b=vYejTmDto6YMS2FobWhBuAidgqapHseUJPK+do7z1o4OMkCO4mkshWViNdRlR37FN6 QiWO8ERu4DW0cuB51XKvgGk0fS1su9DKaZJFwSpO8IJNeBFzYGcAeC0grf5rS6zj9Ln4 FhTM34cpxUWhAvcYHPi/DDNX772bcXmgL+ey1FOOt+4kfu2WHWKJc4SY0bbj6/EbaeOC 8QImRYseWDOwnGPgwMoVAyvc/hkAJaBl5ruW5FQ6XohnqBsPusC2Ngol7jGz5Zc3XdWd K6mpMcmxU6xX3gcb+Z06MEPBtO3zMFGmS77McdjJO7/9+GNXWlQ3o1n1bcZRxyLgPvgE XGPQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=i3Jc3X0qqsWXpDgBU31PMBGvMF6Gom6a3AhiCBIdnXs=; b=hWSBhkmPqPIi+b1q8Zrharf29ornMi7SzSMCCSr9Zhqwkss5yPRdV6oJLB7i1NlV9y 2KshUsNbvXG5m+aNmMpVtcBcCK6sASmBFQ702/6Yw4pIlOSejF03jBUnYm8EzRqU8WfE 4Vtn8r20Vlz7uYCwbNLstIEbwraumQG6NuuUP1mM4BKZxuCMHOeNl1C0nXhjSj6WiVeO wTX5l9OF53VXAmC9VcA4TeS37gOy1kX+GALjV55LD2zM8LDXBCS85p6U6VomQUgGBusp QRINf3T6MBj2B6lOLRTfndCbdu4KvnnXSKiEIzhLuoBTjNIs1So1lHEu5Co1JsW8ro+8 IGOQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=gTT0AISE; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x24-20020a63b218000000b0053f73e787fcsi895899pge.555.2023.05.31.06.21.28; Wed, 31 May 2023 06:21:41 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=gTT0AISE; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235688AbjEaMtq (ORCPT + 99 others); Wed, 31 May 2023 08:49:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45552 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236001AbjEaMtB (ORCPT ); Wed, 31 May 2023 08:49:01 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52DA5E4E for ; Wed, 31 May 2023 05:48:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=i3Jc3X0qqsWXpDgBU31PMBGvMF6Gom6a3AhiCBIdnXs=; b=gTT0AISEooSh2zxjPsV3tvvguP CXuZwgwr4hBfHGdkcb/t246rV9FiTj7hKB5mWfEsota1pM6nfM3yy5tWrv9ywUTUL8lt6/lZa8DJn YF82mlxAyuq/eYkJd7Ztp5jMTDnycB7LFRaOH4lwkq0HQPVddP2+TW3WGxQWAAdBTGThNc3p09HqW jkuUnl7ZvTG6ZuPvygU47iBoZdMEB5+Ow/y7SpXXSpij9rVbSLA0buY3V+CkDs+DgnZskkqS8yxhk ycyLl01CENUacoYS4L039TK+bab2D4zkcNB8TQpnEZDWUViX3yjla7bxF/51FSP8gEbUDAJCjMapF 6n8pKL7A==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q4LEp-007GRa-Go; Wed, 31 May 2023 12:47:39 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 83900300CB5; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0707922BA645B; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.137187212@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:47 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 08/15] sched: Commit to EEVDF References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED 
autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767416059205386720?= X-GMAIL-MSGID: =?utf-8?q?1767416059205386720?= EEVDF is a better defined scheduling policy, as a result it has less heuristics/tunables. There is no compelling reason to keep CFS around. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/debug.c | 6 kernel/sched/fair.c | 465 +++--------------------------------------------- kernel/sched/features.h | 12 - kernel/sched/sched.h | 5 4 files changed, 38 insertions(+), 450 deletions(-) --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -347,10 +347,7 @@ static __init int sched_init_debug(void) debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops); #endif - debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency); debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity); - debugfs_create_u32("idle_min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_idle_min_granularity); - debugfs_create_u32("wakeup_granularity_ns", 0644, debugfs_sched, &sysctl_sched_wakeup_granularity); debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms); debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once); @@ -865,10 +862,7 @@ static void sched_debug_header(struct se SEQ_printf(m, " .%-40s: %Ld\n", #x, (long long)(x)) #define PN(x) \ SEQ_printf(m, " .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x)) - PN(sysctl_sched_latency); PN(sysctl_sched_min_granularity); - PN(sysctl_sched_idle_min_granularity); - PN(sysctl_sched_wakeup_granularity); P(sysctl_sched_child_runs_first); P(sysctl_sched_features); #undef PN --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -58,22 +58,6 @@ #include "autogroup.h" /* - * Targeted preemption latency for CPU-bound tasks: - * - * NOTE: this latency value is not the same as the concept of - * 'timeslice length' - timeslices in CFS are of variable length - * and have no persistent notion like in traditional, time-slice - * based scheduling concepts. - * - * (to see the precise effective timeslice length of your workload, - * run vmstat and monitor the context-switches (cs) field) - * - * (default: 6ms * (1 + ilog(ncpus)), units: nanoseconds) - */ -unsigned int sysctl_sched_latency = 6000000ULL; -static unsigned int normalized_sysctl_sched_latency = 6000000ULL; - -/* * The initial- and re-scaling of tunables is configurable * * Options are: @@ -95,36 +79,11 @@ unsigned int sysctl_sched_min_granularit static unsigned int normalized_sysctl_sched_min_granularity = 750000ULL; /* - * Minimal preemption granularity for CPU-bound SCHED_IDLE tasks. - * Applies only when SCHED_IDLE tasks compete with normal tasks. - * - * (default: 0.75 msec) - */ -unsigned int sysctl_sched_idle_min_granularity = 750000ULL; - -/* - * This value is kept at sysctl_sched_latency/sysctl_sched_min_granularity - */ -static unsigned int sched_nr_latency = 8; - -/* * After fork, child runs first. If set to 0 (default) then * parent will (try to) run first. */ unsigned int sysctl_sched_child_runs_first __read_mostly; -/* - * SCHED_OTHER wake-up granularity. - * - * This option delays the preemption effects of decoupled workloads - * and reduces their over-scheduling. 
Synchronous workloads will still - * have immediate wakeup/sleep latencies. - * - * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds) - */ -unsigned int sysctl_sched_wakeup_granularity = 1000000UL; -static unsigned int normalized_sysctl_sched_wakeup_granularity = 1000000UL; - const_debug unsigned int sysctl_sched_migration_cost = 500000UL; int sched_thermal_decay_shift; @@ -279,8 +238,6 @@ static void update_sysctl(void) #define SET_SYSCTL(name) \ (sysctl_##name = (factor) * normalized_sysctl_##name) SET_SYSCTL(sched_min_granularity); - SET_SYSCTL(sched_latency); - SET_SYSCTL(sched_wakeup_granularity); #undef SET_SYSCTL } @@ -888,30 +845,6 @@ struct sched_entity *__pick_first_entity return __node_2_se(left); } -static struct sched_entity *__pick_next_entity(struct sched_entity *se) -{ - struct rb_node *next = rb_next(&se->run_node); - - if (!next) - return NULL; - - return __node_2_se(next); -} - -static struct sched_entity *pick_cfs(struct cfs_rq *cfs_rq, struct sched_entity *curr) -{ - struct sched_entity *left = __pick_first_entity(cfs_rq); - - /* - * If curr is set we have to see if its left of the leftmost entity - * still in the tree, provided there was anything in the tree at all. - */ - if (!left || (curr && entity_before(curr, left))) - left = curr; - - return left; -} - /* * Earliest Eligible Virtual Deadline First * @@ -1008,85 +941,15 @@ int sched_update_scaling(void) { unsigned int factor = get_update_sysctl_factor(); - sched_nr_latency = DIV_ROUND_UP(sysctl_sched_latency, - sysctl_sched_min_granularity); - #define WRT_SYSCTL(name) \ (normalized_sysctl_##name = sysctl_##name / (factor)) WRT_SYSCTL(sched_min_granularity); - WRT_SYSCTL(sched_latency); - WRT_SYSCTL(sched_wakeup_granularity); #undef WRT_SYSCTL return 0; } #endif -/* - * The idea is to set a period in which each task runs once. - * - * When there are too many tasks (sched_nr_latency) we have to stretch - * this period because otherwise the slices get too small. - * - * p = (nr <= nl) ? l : l*nr/nl - */ -static u64 __sched_period(unsigned long nr_running) -{ - if (unlikely(nr_running > sched_nr_latency)) - return nr_running * sysctl_sched_min_granularity; - else - return sysctl_sched_latency; -} - -static bool sched_idle_cfs_rq(struct cfs_rq *cfs_rq); - -/* - * We calculate the wall-time slice from the period by taking a part - * proportional to the weight. 
- * - * s = p*P[w/rw] - */ -static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se) -{ - unsigned int nr_running = cfs_rq->nr_running; - struct sched_entity *init_se = se; - unsigned int min_gran; - u64 slice; - - if (sched_feat(ALT_PERIOD)) - nr_running = rq_of(cfs_rq)->cfs.h_nr_running; - - slice = __sched_period(nr_running + !se->on_rq); - - for_each_sched_entity(se) { - struct load_weight *load; - struct load_weight lw; - struct cfs_rq *qcfs_rq; - - qcfs_rq = cfs_rq_of(se); - load = &qcfs_rq->load; - - if (unlikely(!se->on_rq)) { - lw = qcfs_rq->load; - - update_load_add(&lw, se->load.weight); - load = &lw; - } - slice = __calc_delta(slice, se->load.weight, load); - } - - if (sched_feat(BASE_SLICE)) { - if (se_is_idle(init_se) && !sched_idle_cfs_rq(cfs_rq)) - min_gran = sysctl_sched_idle_min_granularity; - else - min_gran = sysctl_sched_min_granularity; - - slice = max_t(u64, slice, min_gran); - } - - return slice; -} - static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se); /* @@ -1098,35 +961,25 @@ static void update_deadline(struct cfs_r if ((s64)(se->vruntime - se->deadline) < 0) return; - if (sched_feat(EEVDF)) { - /* - * For EEVDF the virtual time slope is determined by w_i (iow. - * nice) while the request time r_i is determined by - * sysctl_sched_min_granularity. - */ - se->slice = sysctl_sched_min_granularity; - - /* - * The task has consumed its request, reschedule. - */ - if (cfs_rq->nr_running > 1) { - resched_curr(rq_of(cfs_rq)); - clear_buddies(cfs_rq, se); - } - } else { - /* - * When many tasks blow up the sched_period; it is possible - * that sched_slice() reports unusually large results (when - * many tasks are very light for example). Therefore impose a - * maximum. - */ - se->slice = min_t(u64, sched_slice(cfs_rq, se), sysctl_sched_latency); - } + /* + * For EEVDF the virtual time slope is determined by w_i (iow. + * nice) while the request time r_i is determined by + * sysctl_sched_min_granularity. + */ + se->slice = sysctl_sched_min_granularity; /* * EEVDF: vd_i = ve_i + r_i / w_i */ se->deadline = se->vruntime + calc_delta_fair(se->slice, se); + + /* + * The task has consumed its request, reschedule. 
+ */ + if (cfs_rq->nr_running > 1) { + resched_curr(rq_of(cfs_rq)); + clear_buddies(cfs_rq, se); + } } #include "pelt.h" @@ -5055,19 +4908,6 @@ static inline void update_misfit_status( #endif /* CONFIG_SMP */ -static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se) -{ -#ifdef CONFIG_SCHED_DEBUG - s64 d = se->vruntime - cfs_rq->min_vruntime; - - if (d < 0) - d = -d; - - if (d > 3*sysctl_sched_latency) - schedstat_inc(cfs_rq->nr_spread_over); -#endif -} - static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) { @@ -5218,7 +5058,6 @@ enqueue_entity(struct cfs_rq *cfs_rq, st check_schedstat_required(); update_stats_enqueue_fair(cfs_rq, se, flags); - check_spread(cfs_rq, se); if (!curr) __enqueue_entity(cfs_rq, se); se->on_rq = 1; @@ -5230,17 +5069,6 @@ enqueue_entity(struct cfs_rq *cfs_rq, st } } -static void __clear_buddies_last(struct sched_entity *se) -{ - for_each_sched_entity(se) { - struct cfs_rq *cfs_rq = cfs_rq_of(se); - if (cfs_rq->last != se) - break; - - cfs_rq->last = NULL; - } -} - static void __clear_buddies_next(struct sched_entity *se) { for_each_sched_entity(se) { @@ -5252,27 +5080,10 @@ static void __clear_buddies_next(struct } } -static void __clear_buddies_skip(struct sched_entity *se) -{ - for_each_sched_entity(se) { - struct cfs_rq *cfs_rq = cfs_rq_of(se); - if (cfs_rq->skip != se) - break; - - cfs_rq->skip = NULL; - } -} - static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se) { - if (cfs_rq->last == se) - __clear_buddies_last(se); - if (cfs_rq->next == se) __clear_buddies_next(se); - - if (cfs_rq->skip == se) - __clear_buddies_skip(se); } static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq); @@ -5330,45 +5141,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, st update_idle_cfs_rq_clock_pelt(cfs_rq); } -/* - * Preempt the current task with a newly woken task if needed: - */ -static void -check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr) -{ - unsigned long delta_exec; - struct sched_entity *se; - s64 delta; - - delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime; - if (delta_exec > curr->slice) { - resched_curr(rq_of(cfs_rq)); - /* - * The current task ran long enough, ensure it doesn't get - * re-elected due to buddy favours. - */ - clear_buddies(cfs_rq, curr); - return; - } - - /* - * Ensure that a task that missed wakeup preemption by a - * narrow margin doesn't have to wait for a full slice. - * This also mitigates buddy induced latencies under load. - */ - if (delta_exec < sysctl_sched_min_granularity) - return; - - se = __pick_first_entity(cfs_rq); - delta = curr->vruntime - se->vruntime; - - if (delta < 0) - return; - - if (delta > curr->slice) - resched_curr(rq_of(cfs_rq)); -} - static void set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) { @@ -5407,9 +5179,6 @@ set_next_entity(struct cfs_rq *cfs_rq, s se->prev_sum_exec_runtime = se->sum_exec_runtime; } -static int -wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se); - /* * Pick the next process, keeping these things in mind, in this order: * 1) keep things fair between processes/task groups @@ -5420,53 +5189,14 @@ wakeup_preempt_entity(struct sched_entit static struct sched_entity * pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr) { - struct sched_entity *left, *se; - - if (sched_feat(EEVDF)) { - /* - * Enabling NEXT_BUDDY will affect latency but not fairness. 
- */ - if (sched_feat(NEXT_BUDDY) && - cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) - return cfs_rq->next; - - return pick_eevdf(cfs_rq); - } - - se = left = pick_cfs(cfs_rq, curr); - /* - * Avoid running the skip buddy, if running something else can - * be done without getting too unfair. + * Enabling NEXT_BUDDY will affect latency but not fairness. */ - if (cfs_rq->skip && cfs_rq->skip == se) { - struct sched_entity *second; + if (sched_feat(NEXT_BUDDY) && + cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) + return cfs_rq->next; - if (se == curr) { - second = __pick_first_entity(cfs_rq); - } else { - second = __pick_next_entity(se); - if (!second || (curr && entity_before(curr, second))) - second = curr; - } - - if (second && wakeup_preempt_entity(second, left) < 1) - se = second; - } - - if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1) { - /* - * Someone really wants this to run. If it's not unfair, run it. - */ - se = cfs_rq->next; - } else if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, left) < 1) { - /* - * Prefer last buddy, try to return the CPU to a preempted task. - */ - se = cfs_rq->last; - } - - return se; + return pick_eevdf(cfs_rq); } static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq); @@ -5483,8 +5213,6 @@ static void put_prev_entity(struct cfs_r /* throttle cfs_rqs exceeding runtime */ check_cfs_rq_runtime(cfs_rq); - check_spread(cfs_rq, prev); - if (prev->on_rq) { update_stats_wait_start_fair(cfs_rq, prev); /* Put 'current' back into the tree. */ @@ -5525,9 +5253,6 @@ entity_tick(struct cfs_rq *cfs_rq, struc hrtimer_active(&rq_of(cfs_rq)->hrtick_timer)) return; #endif - - if (!sched_feat(EEVDF) && cfs_rq->nr_running > 1) - check_preempt_tick(cfs_rq, curr); } @@ -6561,8 +6286,7 @@ static void hrtick_update(struct rq *rq) if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class) return; - if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency) - hrtick_start_fair(rq, curr); + hrtick_start_fair(rq, curr); } #else /* !CONFIG_SCHED_HRTICK */ static inline void @@ -6603,17 +6327,6 @@ static int sched_idle_rq(struct rq *rq) rq->nr_running); } -/* - * Returns true if cfs_rq only has SCHED_IDLE entities enqueued. Note the use - * of idle_nr_running, which does not consider idle descendants of normal - * entities. - */ -static bool sched_idle_cfs_rq(struct cfs_rq *cfs_rq) -{ - return cfs_rq->nr_running && - cfs_rq->nr_running == cfs_rq->idle_nr_running; -} - #ifdef CONFIG_SMP static int sched_idle_cpu(int cpu) { @@ -8099,66 +7812,6 @@ balance_fair(struct rq *rq, struct task_ } #endif /* CONFIG_SMP */ -static unsigned long wakeup_gran(struct sched_entity *se) -{ - unsigned long gran = sysctl_sched_wakeup_granularity; - - /* - * Since its curr running now, convert the gran from real-time - * to virtual-time in his units. - * - * By using 'se' instead of 'curr' we penalize light tasks, so - * they get preempted easier. That is, if 'se' < 'curr' then - * the resulting gran will be larger, therefore penalizing the - * lighter, if otoh 'se' > 'curr' then the resulting gran will - * be smaller, again penalizing the lighter task. - * - * This is especially important for buddies when the leftmost - * task is higher priority than the buddy. - */ - return calc_delta_fair(gran, se); -} - -/* - * Should 'se' preempt 'curr'. 
- * - * |s1 - * |s2 - * |s3 - * g - * |<--->|c - * - * w(c, s1) = -1 - * w(c, s2) = 0 - * w(c, s3) = 1 - * - */ -static int -wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se) -{ - s64 gran, vdiff = curr->vruntime - se->vruntime; - - if (vdiff <= 0) - return -1; - - gran = wakeup_gran(se); - if (vdiff > gran) - return 1; - - return 0; -} - -static void set_last_buddy(struct sched_entity *se) -{ - for_each_sched_entity(se) { - if (SCHED_WARN_ON(!se->on_rq)) - return; - if (se_is_idle(se)) - return; - cfs_rq_of(se)->last = se; - } -} - static void set_next_buddy(struct sched_entity *se) { for_each_sched_entity(se) { @@ -8170,12 +7823,6 @@ static void set_next_buddy(struct sched_ } } -static void set_skip_buddy(struct sched_entity *se) -{ - for_each_sched_entity(se) - cfs_rq_of(se)->skip = se; -} - /* * Preempt the current task with a newly woken task if needed: */ @@ -8184,7 +7831,6 @@ static void check_preempt_wakeup(struct struct task_struct *curr = rq->curr; struct sched_entity *se = &curr->se, *pse = &p->se; struct cfs_rq *cfs_rq = task_cfs_rq(curr); - int scale = cfs_rq->nr_running >= sched_nr_latency; int next_buddy_marked = 0; int cse_is_idle, pse_is_idle; @@ -8200,7 +7846,7 @@ static void check_preempt_wakeup(struct if (unlikely(throttled_hierarchy(cfs_rq_of(pse)))) return; - if (sched_feat(NEXT_BUDDY) && scale && !(wake_flags & WF_FORK)) { + if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK)) { set_next_buddy(pse); next_buddy_marked = 1; } @@ -8248,44 +7894,16 @@ static void check_preempt_wakeup(struct cfs_rq = cfs_rq_of(se); update_curr(cfs_rq); - if (sched_feat(EEVDF)) { - /* - * XXX pick_eevdf(cfs_rq) != se ? - */ - if (pick_eevdf(cfs_rq) == pse) - goto preempt; - - return; - } - - if (wakeup_preempt_entity(se, pse) == 1) { - /* - * Bias pick_next to pick the sched entity that is - * triggering this preemption. - */ - if (!next_buddy_marked) - set_next_buddy(pse); + /* + * XXX pick_eevdf(cfs_rq) != se ? + */ + if (pick_eevdf(cfs_rq) == pse) goto preempt; - } return; preempt: resched_curr(rq); - /* - * Only set the backward buddy when the current task is still - * on the rq. This can happen when a wakeup gets interleaved - * with schedule on the ->pre_schedule() or idle_balance() - * point, either of which can * drop the rq lock. - * - * Also, during early boot the idle thread is in the fair class, - * for obvious reasons its a bad idea to schedule back to it. - */ - if (unlikely(!se->on_rq || curr == rq->idle)) - return; - - if (sched_feat(LAST_BUDDY) && scale && entity_is_task(se)) - set_last_buddy(se); } #ifdef CONFIG_SMP @@ -8486,8 +8104,6 @@ static void put_prev_task_fair(struct rq /* * sched_yield() is very simple - * - * The magic of dealing with the ->skip buddy is in pick_next_entity. */ static void yield_task_fair(struct rq *rq) { @@ -8503,23 +8119,19 @@ static void yield_task_fair(struct rq *r clear_buddies(cfs_rq, se); - if (sched_feat(EEVDF) || curr->policy != SCHED_BATCH) { - update_rq_clock(rq); - /* - * Update run-time statistics of the 'current'. - */ - update_curr(cfs_rq); - /* - * Tell update_rq_clock() that we've just updated, - * so we don't do microscopic update in schedule() - * and double the fastpath cost. - */ - rq_clock_skip_update(rq); - } - if (sched_feat(EEVDF)) - se->deadline += calc_delta_fair(se->slice, se); + update_rq_clock(rq); + /* + * Update run-time statistics of the 'current'. 
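The remainder of this hunk replaces the skip buddy with a plain deadline push, se->deadline += r_i/w_i in virtual time. With invented numbers: a yielding task whose virtual deadline is 10 and whose scaled slice is 3 moves to 13, so any other eligible entity with a deadline before 13 now wins pick_eevdf(); a task yielding on an otherwise idle runqueue simply keeps running.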
+ */ + update_curr(cfs_rq); + /* + * Tell update_rq_clock() that we've just updated, + * so we don't do microscopic update in schedule() + * and double the fastpath cost. + */ + rq_clock_skip_update(rq); - set_skip_buddy(se); + se->deadline += calc_delta_fair(se->slice, se); } static bool yield_to_task_fair(struct rq *rq, struct task_struct *p) @@ -8762,8 +8374,7 @@ static int task_hot(struct task_struct * * Buddy candidates are cache hot: */ if (sched_feat(CACHE_HOT_BUDDY) && env->dst_rq->nr_running && - (&p->se == cfs_rq_of(&p->se)->next || - &p->se == cfs_rq_of(&p->se)->last)) + (&p->se == cfs_rq_of(&p->se)->next)) return 1; if (sysctl_sched_migration_cost == -1) --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -15,13 +15,6 @@ SCHED_FEAT(PLACE_DEADLINE_INITIAL, true) SCHED_FEAT(NEXT_BUDDY, false) /* - * Prefer to schedule the task that ran last (when we did - * wake-preempt) as that likely will touch the same data, increases - * cache locality. - */ -SCHED_FEAT(LAST_BUDDY, true) - -/* * Consider buddies to be cache hot, decreases the likeliness of a * cache buddy being migrated away, increases cache locality. */ @@ -93,8 +86,3 @@ SCHED_FEAT(UTIL_EST, true) SCHED_FEAT(UTIL_EST_FASTUP, true) SCHED_FEAT(LATENCY_WARN, false) - -SCHED_FEAT(ALT_PERIOD, true) -SCHED_FEAT(BASE_SLICE, true) - -SCHED_FEAT(EEVDF, true) --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -576,8 +576,6 @@ struct cfs_rq { */ struct sched_entity *curr; struct sched_entity *next; - struct sched_entity *last; - struct sched_entity *skip; #ifdef CONFIG_SCHED_DEBUG unsigned int nr_spread_over; @@ -2484,9 +2482,6 @@ extern const_debug unsigned int sysctl_s extern unsigned int sysctl_sched_min_granularity; #ifdef CONFIG_SCHED_DEBUG -extern unsigned int sysctl_sched_latency; -extern unsigned int sysctl_sched_idle_min_granularity; -extern unsigned int sysctl_sched_wakeup_granularity; extern int sysctl_resched_latency_warn_ms; extern int sysctl_resched_latency_warn_once; From patchwork Wed May 31 11:58:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101429 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2856685vqr; Wed, 31 May 2023 05:57:58 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7Ojk48RGLSZLZoyQlZyla3o6s+8BY+Nz08+SjVLTdpHncBUVp1jOloOxlbT7cx62qPxJcq X-Received: by 2002:a05:6a20:244f:b0:10b:397c:b1bf with SMTP id t15-20020a056a20244f00b0010b397cb1bfmr6548198pzc.15.1685537878339; Wed, 31 May 2023 05:57:58 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685537878; cv=none; d=google.com; s=arc-20160816; b=0dyB9PQ7hhtZsUr0C/w1CF4yNb+ppwvCoyECOEbsdbwMWa3pPbAiPsydfMoJShKBKR IwEKWduHGoHCqg6mLaGRiUqiTLJcEZfksWKbtwUDyg/uT2yPe08OeDEEwhRu0wDRUom1 04t4pcBroN4WnSk9/4K3v+mDxdYkdBxwRAP26aVGG2DyTbDUeeUrO3ZzbCYxG8ltJkDV M7tTEQrwprDhpBuSRJ38oDWqLEsOzbPipcP6Jp1xbIclg7hFTxG5NSEsTEdsvTSQmnC4 iG7kwZz/wWTNV4/wXew4BP5X1HqqOqALcR5lRXFy7QKQsrThKcrttsFhMbRvB+zSkt3x rB2Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=u3VdHODagUcuhecPEM096ZQQiQ+UGylL9VNtBc1yvAs=; b=rzt2yijdnXd18AOetQd7e24wQGObXsm8bF+E/FXKfOAgkpgNXzRGBbnoz9OLFYW5Yd 65aMQqT3X7YcrQy7FWD4zymxR7HLGpfIgj0chHXwE6/mUabU9tK4LhcETR3uFwn84uTx xG7dLAB0Qj2TfvBnZMaQIOsQVAqrJONZyNSQKlWTZAhaG8kHm5sLHCiia8MOQDq5qepT 
DU1UMMzWm1Njse3VTjArd/RrP0BSr9/5LmfuGKRjznDocD5bUprNwBCJ90LjUXBqFbdM 0AbfFOl8E4VuasBOIAP2szA/8kaBi+E31TTmlavEnWCpPviHhFspXdyn0mlZWcUzY1kv rQfw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b="Q+A/Qhlx"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id y22-20020a637d16000000b0053f327d0321si974089pgc.323.2023.05.31.05.57.41; Wed, 31 May 2023 05:57:58 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b="Q+A/Qhlx"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236051AbjEaMtU (ORCPT + 99 others); Wed, 31 May 2023 08:49:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235962AbjEaMs7 (ORCPT ); Wed, 31 May 2023 08:48:59 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8EC56E41 for ; Wed, 31 May 2023 05:48:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=u3VdHODagUcuhecPEM096ZQQiQ+UGylL9VNtBc1yvAs=; b=Q+A/Qhlx4B3kLfqcrDZlOjwFq8 O8gkyzEhIEd8ORhr2Whn3Fqe4UY0syT85g2dKRIViHBw88XiFbqawPJVzdMibqupkUe4hpxh1eO/U V17hEgz+zzQ0k8jwq9ZVkPlJojvW2BJeDamjRirFA5w1aKZ8dhFBHkTFWIpXbqE5o+okMFeSDRxL5 NWdYCxzQb/Evx26YWpLjZBW+RcktcrrF1HzzUlfCClL/F6qFr3vHPh+OlRkh4iOJ0C8vjBS+oKmm/ A5cXmyWJu6TTReOhv4b9zNsjLbYa/j0MPra8o4rxAhQLxRwvsCCJx3WtuBn39IPJhHzJ0DFm/erGS KZh5P6pQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q4LEp-00FSLD-2z; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 838AC300C0E; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0AF2B22BA645A; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.205287511@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:48 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, 
qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 09/15] sched/debug: Rename min_granularity to base_slice References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414566232776891?= X-GMAIL-MSGID: =?utf-8?q?1767414566232776891?= EEVDF uses this tunable as the base request/slice -- make sure the name reflects this. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 2 +- kernel/sched/debug.c | 4 ++-- kernel/sched/fair.c | 12 ++++++------ kernel/sched/sched.h | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4464,7 +4464,7 @@ static void __sched_fork(unsigned long c p->se.nr_migrations = 0; p->se.vruntime = 0; p->se.vlag = 0; - p->se.slice = sysctl_sched_min_granularity; + p->se.slice = sysctl_sched_base_slice; INIT_LIST_HEAD(&p->se.group_node); #ifdef CONFIG_FAIR_GROUP_SCHED --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -347,7 +347,7 @@ static __init int sched_init_debug(void) debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops); #endif - debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity); + debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice); debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms); debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once); @@ -862,7 +862,7 @@ static void sched_debug_header(struct se SEQ_printf(m, " .%-40s: %Ld\n", #x, (long long)(x)) #define PN(x) \ SEQ_printf(m, " .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x)) - PN(sysctl_sched_min_granularity); + PN(sysctl_sched_base_slice); P(sysctl_sched_child_runs_first); P(sysctl_sched_features); #undef PN --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -75,8 +75,8 @@ unsigned int sysctl_sched_tunable_scalin * * (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds) */ -unsigned int sysctl_sched_min_granularity = 750000ULL; -static unsigned int normalized_sysctl_sched_min_granularity = 750000ULL; +unsigned int sysctl_sched_base_slice = 750000ULL; +static unsigned int normalized_sysctl_sched_base_slice = 750000ULL; /* * After fork, child runs first. If set to 0 (default) then @@ -237,7 +237,7 @@ static void update_sysctl(void) #define SET_SYSCTL(name) \ (sysctl_##name = (factor) * normalized_sysctl_##name) - SET_SYSCTL(sched_min_granularity); + SET_SYSCTL(sched_base_slice); #undef SET_SYSCTL } @@ -943,7 +943,7 @@ int sched_update_scaling(void) #define WRT_SYSCTL(name) \ (normalized_sysctl_##name = sysctl_##name / (factor)) - WRT_SYSCTL(sched_min_granularity); + WRT_SYSCTL(sched_base_slice); #undef WRT_SYSCTL return 0; @@ -964,9 +964,9 @@ static void update_deadline(struct cfs_r /* * For EEVDF the virtual time slope is determined by w_i (iow. 
* nice) while the request time r_i is determined by - * sysctl_sched_min_granularity. + * sysctl_sched_base_slice. */ - se->slice = sysctl_sched_min_granularity; + se->slice = sysctl_sched_base_slice; /* * EEVDF: vd_i = ve_i + r_i / w_i --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2479,7 +2479,7 @@ extern void check_preempt_curr(struct rq extern const_debug unsigned int sysctl_sched_nr_migrate; extern const_debug unsigned int sysctl_sched_migration_cost; -extern unsigned int sysctl_sched_min_granularity; +extern unsigned int sysctl_sched_base_slice; #ifdef CONFIG_SCHED_DEBUG extern int sysctl_resched_latency_warn_ms; From patchwork Wed May 31 11:58:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101425 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2856018vqr; Wed, 31 May 2023 05:56:46 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6tP0iR8NNOp+zF6Ob52SbxQEz61iW5fTpmjeeqDlCkAa0Yh5ONkpZJe9+dEmbYJQcMRn0r X-Received: by 2002:a05:6a00:1303:b0:64d:1185:243c with SMTP id j3-20020a056a00130300b0064d1185243cmr5023622pfu.5.1685537806155; Wed, 31 May 2023 05:56:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685537806; cv=none; d=google.com; s=arc-20160816; b=Bq26MoSI8JCwdyj6MJVkMBJ7WNQRw7W+OMlRe6dGtv7iUe2hUlq9nxlQt/+XaVgX0X OEpdMzAukpmxP0huQ7p7mCZr3+THZze93p6/YzwJ088l+L67PDfWoG0IUCka0fjm3Sc8 YpG8kUIXn0M3yrU6ED+brI9kQ8dHXJixKN3cyRfJoB6qBI4LrLQCGkFzAdOnB0wh/8fj hV2ESTzibCMMlruAzCxW4wYFUGN6KSSirvbOH8LO2k/C/Nccndjv9J07G8aJ/4gb2k8W aCEMKmIZw3MgUoNammWu5+1VYUNkslMM4VZHw4ZPaIXiEdlayQkM9w+dzra8px+P5/JV /dhQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=5rlW/Hj0raApbJ4gn/QrplWF37O1ajo4J98jOFm33f4=; b=quRrEuTKj56GXFQLhlOCQVD1QexE7eOAmvv5165NUSQFZ9vUiEnMo/2Z70sY9EKkWT vC3G5AsZimGjMnW+f/zc1QvJLYEujUDkOMP7od0Tpogm77RyI9XI4KDv+KJ4K6xA5hOg LDTDWmk+Q4FgbfPvaG9a7U6R+OXMXvlgs8XP9PTk/CvBQok8wTk2ZFcdQn7o9hE8unqh CPyuw3f9dPowMhHuSQ6uKldDeIiNJnCBVok+yKyKD5btEGhWDeBd/4WU9q+OGPvbYpXv SbcEd6GTMjw04/J7b8UreIbG4dtTkk99mDPU17r+n4/scBl1SthBUKkLjAtsy3UWU1Pm sNew== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=AKXfB9vo; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id g5-20020aa79dc5000000b0064cecf7b983si2132063pfq.143.2023.05.31.05.56.32; Wed, 31 May 2023 05:56:46 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=AKXfB9vo; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235996AbjEaMtR (ORCPT + 99 others); Wed, 31 May 2023 08:49:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235982AbjEaMs7 (ORCPT ); Wed, 31 May 2023 08:48:59 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 457161BE for ; Wed, 31 May 2023 05:48:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=5rlW/Hj0raApbJ4gn/QrplWF37O1ajo4J98jOFm33f4=; b=AKXfB9voxqdnfjL6anU/gtnbVL zF/18FvMhanSXylI3B+Atq9nG+3b/f2XFXB0kQ6RK6z1luaxQCaQYrye3EjFaE2EgsR7bwn8jv5h5 4kD6YPEDFyYT8d/odLF1WdKeppuhO6vOgfAtLU2AHbtvRt7h3fkMpKaeyNtAxEMXHjD7w7UWrhgp1 mvEQdiwZsIn6Xf/qXFgMGEYnFGdo6j7SymvLQkpmgninfM2NcjSsAX18CS/bQzg587Kfqb5QEem0w zxNmkksbBZ0lvD/2dQ33sS43S0B4ZP3ZamVfGAX5A0cGsWQ24CF0JRkgkAbECG9ue+KP7e66ureKp ngDtVIuA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q4LEp-00FSLF-3C; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 8C6FD300DDC; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0E68522BA645C; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.274010996@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:49 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 10/15] sched/fair: Propagate enqueue flags into place_entity() References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, 
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414490374786339?= X-GMAIL-MSGID: =?utf-8?q?1767414490374786339?= This allows place_entity() to consider ENQUEUE_WAKEUP and ENQUEUE_MIGRATED. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/fair.c | 10 +++++----- kernel/sched/sched.h | 1 + 2 files changed, 6 insertions(+), 5 deletions(-) --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -4909,7 +4909,7 @@ static inline void update_misfit_status( #endif /* CONFIG_SMP */ static void -place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) +place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) { u64 vslice = calc_delta_fair(se->slice, se); u64 vruntime = avg_vruntime(cfs_rq); @@ -4998,7 +4998,7 @@ place_entity(struct cfs_rq *cfs_rq, stru * on average, halfway through their slice, as such start tasks * off with half a slice to ease into the competition. */ - if (sched_feat(PLACE_DEADLINE_INITIAL) && initial) + if (sched_feat(PLACE_DEADLINE_INITIAL) && (flags & ENQUEUE_INITIAL)) vslice /= 2; /* @@ -5021,7 +5021,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st * update_curr(). */ if (curr) - place_entity(cfs_rq, se, 0); + place_entity(cfs_rq, se, flags); update_curr(cfs_rq); @@ -5048,7 +5048,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st * we can place the entity. */ if (!curr) - place_entity(cfs_rq, se, 0); + place_entity(cfs_rq, se, flags); account_entity_enqueue(cfs_rq, se); @@ -12053,7 +12053,7 @@ static void task_fork_fair(struct task_s curr = cfs_rq->curr; if (curr) update_curr(cfs_rq); - place_entity(cfs_rq, se, 1); + place_entity(cfs_rq, se, ENQUEUE_INITIAL); rq_unlock(rq, &rf); } --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2174,6 +2174,7 @@ extern const u32 sched_prio_to_wmult[40 #else #define ENQUEUE_MIGRATED 0x00 #endif +#define ENQUEUE_INITIAL 0x80 #define RETRY_TASK ((void *)-1UL) From patchwork Wed May 31 11:58:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101448 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2868233vqr; Wed, 31 May 2023 06:11:12 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4xUDmPjqqy+dHZQJFdGANxHSAnLxQ17PavBbXEw1mtqz0BUo15H6TpHsqwjIg3aA9M7zGO X-Received: by 2002:a05:6a20:7d96:b0:10c:ef9f:ddbd with SMTP id v22-20020a056a207d9600b0010cef9fddbdmr5416629pzj.8.1685538672600; Wed, 31 May 2023 06:11:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685538672; cv=none; d=google.com; s=arc-20160816; b=v7lcDZiTf509SPrfHkTGrgxNFE1lAL6BQFbhoJ0ZslYEqAh2Donz9303nFeBtt+v1K CUUj9OM2y004i5Lqda5AEvSIulkfX+nXUWCm33oL/Mcb9OnU2dS9Lc55ztoA3BD5VVZQ WjDNQde90RguWrnf1Vhbllkv8BHefxC5WxPhk+qQnyhQqu51L6W90OX2UiFuAsEkMjq5 CxP5E/h4ACYT8Xm1JWW0HAbDiKPbzh0HPaQslENk/1Yzgk1mUhKiVFed4NvhZYeUrnx5 I6khCvMMtVbXoBQJMecUMTDggaQVdCl1hZwu4XAJjTarDr4cWOOSKQ1NZxWkW2mYJJbT drvg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=tv6sCKoE79EpD05Hd2YDu+LAbWtn1Gj3QHTavbmMia4=; b=VQiFrWCrMjIhSoEhEgq46GLYr72FmkzvDAJPRQhBrnIKmk0Tc5wL23G0Y4P3fb8vZM 
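A hedged illustration of the flag plumbing introduced above (the _EX names are local to this sketch; the 0x80 value is the ENQUEUE_INITIAL bit the patch adds): task_fork_fair() now passes ENQUEUE_INITIAL, so place_entity() can tell a brand-new task apart from a wakeup and halve its first virtual slice when PLACE_DEADLINE_INITIAL is enabled.

#define ENQUEUE_INITIAL_EX	0x80	/* mirrors ENQUEUE_INITIAL from the patch above */

static unsigned long long first_vslice(unsigned long long vslice, int flags)
{
	/* new tasks start, on average, halfway through a slice */
	if (flags & ENQUEUE_INITIAL_EX)
		vslice /= 2;

	return vslice;
}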
yUNfGRuThGIbQxYy/JGNeUPoidfu0+C7i3KflwRU04BT9iyh9Nvv0RscQILysy73HVlK MO/wXrKAV9NVd4EnhvfHZ3VTH0VNTULjC1inGXTG+VxxCyrOyJYlkW5LwZPz8WMKZe4H j5AAOQZGGJFm7LARNhKrjzU97lkbk8O99DakfH9Ktg8Fr/7fJZKaQuD9iMnZHieSsxTh WW1F9AjNHKjuM31kHmyIqyU/E3r0LLgdJJL5T83UvxDBzQsUOKMRWncko8wqQKRjiB4x gzAQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=EReReZwa; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id w5-20020aa79545000000b0064d632aeecbsi402318pfq.246.2023.05.31.06.10.58; Wed, 31 May 2023 06:11:12 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=EReReZwa; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236018AbjEaMta (ORCPT + 99 others); Wed, 31 May 2023 08:49:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45522 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235988AbjEaMs7 (ORCPT ); Wed, 31 May 2023 08:48:59 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E0BBE48 for ; Wed, 31 May 2023 05:48:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=tv6sCKoE79EpD05Hd2YDu+LAbWtn1Gj3QHTavbmMia4=; b=EReReZwapFqWEFIy7DpROKSIH9 q1jJLDzgpDvkhe1XTxxvXBhQAoPtKBjTYzafnLkXptHn9yqT3StYSxUJGwGMih3QA2JfnKLgl/UPh U2P3Tc2Qp/0IMoj9cMCowiKz2IbB3cowRjpzGyozNaGheTUHkPWB99Ie6m3WH4SKAojBrdhhbRGGN WGMtY5Ao/JhLv7PN4EKTg9fhdp/2/XjPvjzG3RMiTqETbI6HD9o3tqqyuI+XH7LQ5Z9bLgkX+9q6i 0KOdr9JR01Nz5IS2nFjXPxv3J6A0AaQ/E+D91T8OBCtruT6ZVsln4kpjwN+ZUyqbJov1fGX8Uu+wO feaq7k6w==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q4LEp-007GRc-WB; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 8CFCB300E25; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 1486922BA645E; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.341527144@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:50 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, 
qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [PATCH 11/15] sched/eevdf: Better handle mixed slice length References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767415398853996371?= X-GMAIL-MSGID: =?utf-8?q?1767415398853996371?=
In the case where (due to latency-nice) there are different request sizes in the tree, the smaller requests tend to be dominated by the larger. Also note how the EEVDF lag limits are based on r_max. Therefore, add a heuristic that, for the mixed request size case, moves smaller requests to placement strategy #2, which ensures they are immediately eligible and, due to their smaller (virtual) deadline, will cause preemption. NOTE: this relies on update_entity_lag() to impose lag limits above a single slice. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/fair.c | 30 ++++++++++++++++++++++++++++++ kernel/sched/features.h | 1 + kernel/sched/sched.h | 1 + 3 files changed, 32 insertions(+)
--- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -642,6 +642,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq, s64 key = entity_key(cfs_rq, se); cfs_rq->avg_vruntime += key * weight; + cfs_rq->avg_slice += se->slice * weight; cfs_rq->avg_load += weight; } @@ -652,6 +653,7 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq, s64 key = entity_key(cfs_rq, se); cfs_rq->avg_vruntime -= key * weight; + cfs_rq->avg_slice -= se->slice * weight; cfs_rq->avg_load -= weight; } @@ -4908,6 +4910,21 @@ static inline void update_misfit_status( #endif /* CONFIG_SMP */ +static inline bool +entity_has_slept(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) +{ + u64 now; + + if (!(flags & ENQUEUE_WAKEUP)) + return false; + + if (flags & ENQUEUE_MIGRATED) + return true; + + now = rq_clock_task(rq_of(cfs_rq)); + return (s64)(se->exec_start - now) >= se->slice; +} + static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) { @@ -4930,6 +4947,19 @@ place_entity(struct cfs_rq *cfs_rq, stru lag = se->vlag; /* + * For latency sensitive tasks; those that have a shorter than + * average slice and do not fully consume the slice, transition + * to EEVDF placement strategy #2. + */ + if (sched_feat(PLACE_FUDGE) && + (cfs_rq->avg_slice > se->slice * cfs_rq->avg_load) && + entity_has_slept(cfs_rq, se, flags)) { + lag += vslice; + if (lag > 0) + lag = 0; + } + + /* * If we want to place a task and preserve lag, we have to * consider the effect of the new entity on the weighted * average and compensate for this, otherwise lag can quickly --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -5,6 +5,7 @@ * sleep+wake cycles. EEVDF placement strategy #1, #2 if disabled.
*/ SCHED_FEAT(PLACE_LAG, true) +SCHED_FEAT(PLACE_FUDGE, true) SCHED_FEAT(PLACE_DEADLINE_INITIAL, true) /* --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -555,6 +555,7 @@ struct cfs_rq { unsigned int idle_h_nr_running; /* SCHED_IDLE */ s64 avg_vruntime; + u64 avg_slice; u64 avg_load; u64 exec_clock; From patchwork Wed May 31 11:58:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101427 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2856505vqr; Wed, 31 May 2023 05:57:36 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ508OWJUFAr07TWq+mcouyKNW8cabdOegh5IODNv3a6qOaWtBaH7YlU96NgmndIHLi8wsqJ X-Received: by 2002:a05:6a20:394b:b0:101:1e75:78e with SMTP id r11-20020a056a20394b00b001011e75078emr14418076pzg.14.1685537855871; Wed, 31 May 2023 05:57:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685537855; cv=none; d=google.com; s=arc-20160816; b=JIUtjIoTdMB0/z0iJPweXTIgYJFxaJB9tDF12ppDQnKLrwGHXEEHYpsWy320zMJpxq m8BLlh/KSBLMXeMwlU+E53kDvWcLMxBNr9Q0zlkO+Tc56EvfX/2IkYmwqZJsAhJmEljy RB5Jcy967j5ojN9pLJsIX9kLPnPZOmG7hFzdSVjt1J9bynChjf+8Ove32L05DfPSlsNp cKfr7O4Z+sYP9VzTEyzjPC+gfjYcRTFbksKNUP9rTNSzsgTiIMxazo7y/tGZtajDNlT+ pZf9+IQm/fd6/XC94uqe8qlnR/xqD+wD2FiUx2Ya8/ACKGUZDVcuDijNe46tZOAVYPaK 91RQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=2eMIcUWTYRxHYeCUlMm6R4u+dyrFJONAkwQnAOCUvFo=; b=M4Hc6US5mChU4Xb05Uf7kj7fRMVNlj6C6VNGJ1UZGDIntWAalNewtab6gbt1+Fey0b s2dITbNVZHo1kv/MwsB54KMMw34vZENobPSal0XhqSrgbB/3twFlpL6GVWDZF8EAVSly yfZVQRVJIG6c7GcXcp/cfk9eQ/skwCGZKtNd7+aGev08jsc/ATWT4I+8HsmMZvaWyKhQ GqC08P2gT2TcNpe6Hwjg4rbxRb29iwgvMT3DiTpykrJoWTeSBHmONB72Y1louygi/eDh JZnpK8dLWaaPwkQOVCLx0PVTuC0UCAh3ZE82oCF0T64TdAXkUl6jW3LsqGlzM66rXkt1 uWGA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=IAexiwHh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
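The PLACE_FUDGE test above is easy to misread, so here is a small sketch of the invariant it relies on (illustrative only): avg_vruntime_add()/avg_vruntime_sub() keep avg_slice = sum(slice_i * w_i) and avg_load = sum(w_i), so comparing avg_slice against se->slice * avg_load asks whether this entity's request is shorter than the load-weighted average request, without doing a division.

static int shorter_than_average(unsigned long long avg_slice,	/* sum of slice_i * w_i */
				unsigned long long avg_load,	/* sum of w_i */
				unsigned long long se_slice)
{
	/* equivalent to: se_slice < avg_slice / avg_load */
	return avg_slice > se_slice * avg_load;
}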
[2620:137:e000::1:20]) by mx.google.com with ESMTP id u190-20020a6379c7000000b0051b0d0897c4si923828pgc.818.2023.05.31.05.57.18; Wed, 31 May 2023 05:57:35 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=IAexiwHh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236114AbjEaMtl (ORCPT + 99 others); Wed, 31 May 2023 08:49:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235953AbjEaMtA (ORCPT ); Wed, 31 May 2023 08:49:00 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A680128 for ; Wed, 31 May 2023 05:48:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=2eMIcUWTYRxHYeCUlMm6R4u+dyrFJONAkwQnAOCUvFo=; b=IAexiwHhA6DsahLOAZx44wTyAI htsLvETjs7HgIeYzogrv3uVeL0nbYfwKHMSe2Ie/LSiyhRv0398pKkirB6YvC5L7KD5/UO0q9f6pT uYo6rfkQ8WSLvFmVgxw6kfwmrgNFFj5IA/+Nvxe5YDZD6mhoZ5Ybe7vo4EaYL0M4GRHKEMia38ULQ CjFgKr4YMEv0gYHHR+3/1k5udY9e6N6xTu5FPZMYp9wWJ9d7JjVvls9r9s1VeByY5EsH3b7eMuDef FP4lh+FQs1QldUrCIUjEYyYJTpP47Tx7e/e7A8+7gGDMOlQLFU9KJMy6O7PJX4Z3GAajlB/77E2H/ F0vtY5qg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q4LEq-00FSLI-0V; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 91570300E3F; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 17AEC22BA645D; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.410388887@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:51 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de, Parth Shah Subject: [RFC][PATCH 12/15] sched: Introduce latency-nice as a per-task attribute References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, 
DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767414542621137127?= X-GMAIL-MSGID: =?utf-8?q?1767414542621137127?= From: Parth Shah Latency-nice indicates the latency requirements of a task with respect to the other tasks in the system. The value of the attribute can be within the range of [-20, 19] both inclusive to be in-line with the values just like task nice values. Just like task nice, -20 is the 'highest' priority and conveys this task should get minimal latency, conversely 19 is the lowest priority and conveys this task will get the least consideration and will thus receive maximal latency. [peterz: rebase, squash] Signed-off-by: Parth Shah Signed-off-by: Peter Zijlstra (Intel) --- include/linux/sched.h | 1 + include/uapi/linux/sched.h | 4 +++- include/uapi/linux/sched/types.h | 19 +++++++++++++++++++ init/init_task.c | 3 ++- kernel/sched/core.c | 27 ++++++++++++++++++++++++++- kernel/sched/debug.c | 1 + tools/include/uapi/linux/sched.h | 4 +++- 7 files changed, 55 insertions(+), 4 deletions(-) --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -791,6 +791,7 @@ struct task_struct { int static_prio; int normal_prio; unsigned int rt_priority; + int latency_prio; struct sched_entity se; struct sched_rt_entity rt; --- a/include/uapi/linux/sched.h +++ b/include/uapi/linux/sched.h @@ -132,6 +132,7 @@ struct clone_args { #define SCHED_FLAG_KEEP_PARAMS 0x10 #define SCHED_FLAG_UTIL_CLAMP_MIN 0x20 #define SCHED_FLAG_UTIL_CLAMP_MAX 0x40 +#define SCHED_FLAG_LATENCY_NICE 0x80 #define SCHED_FLAG_KEEP_ALL (SCHED_FLAG_KEEP_POLICY | \ SCHED_FLAG_KEEP_PARAMS) @@ -143,6 +144,7 @@ struct clone_args { SCHED_FLAG_RECLAIM | \ SCHED_FLAG_DL_OVERRUN | \ SCHED_FLAG_KEEP_ALL | \ - SCHED_FLAG_UTIL_CLAMP) + SCHED_FLAG_UTIL_CLAMP | \ + SCHED_FLAG_LATENCY_NICE) #endif /* _UAPI_LINUX_SCHED_H */ --- a/include/uapi/linux/sched/types.h +++ b/include/uapi/linux/sched/types.h @@ -10,6 +10,7 @@ struct sched_param { #define SCHED_ATTR_SIZE_VER0 48 /* sizeof first published struct */ #define SCHED_ATTR_SIZE_VER1 56 /* add: util_{min,max} */ +#define SCHED_ATTR_SIZE_VER2 60 /* add: latency_nice */ /* * Extended scheduling parameters data structure. @@ -98,6 +99,22 @@ struct sched_param { * scheduled on a CPU with no more capacity than the specified value. * * A task utilization boundary can be reset by setting the attribute to -1. + * + * Latency Tolerance Attributes + * =========================== + * + * A subset of sched_attr attributes allows to specify the relative latency + * requirements of a task with respect to the other tasks running/queued in the + * system. + * + * @ sched_latency_nice task's latency_nice value + * + * The latency_nice of a task can have any value in a range of + * [MIN_LATENCY_NICE..MAX_LATENCY_NICE]. + * + * A task with latency_nice with the value of LATENCY_NICE_MIN can be + * taken for a task requiring a lower latency as opposed to the task with + * higher latency_nice. 
*/ struct sched_attr { __u32 size; @@ -120,6 +137,8 @@ struct sched_attr { __u32 sched_util_min; __u32 sched_util_max; + /* latency requirement hints */ + __s32 sched_latency_nice; }; #endif /* _UAPI_LINUX_SCHED_TYPES_H */ --- a/init/init_task.c +++ b/init/init_task.c @@ -78,6 +78,7 @@ struct task_struct init_task .prio = MAX_PRIO - 20, .static_prio = MAX_PRIO - 20, .normal_prio = MAX_PRIO - 20, + .latency_prio = DEFAULT_PRIO, .policy = SCHED_NORMAL, .cpus_ptr = &init_task.cpus_mask, .user_cpus_ptr = NULL, @@ -89,7 +90,7 @@ struct task_struct init_task .fn = do_no_restart_syscall, }, .se = { - .group_node = LIST_HEAD_INIT(init_task.se.group_node), + .group_node = LIST_HEAD_INIT(init_task.se.group_node), }, .rt = { .run_list = LIST_HEAD_INIT(init_task.rt.run_list), --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4719,6 +4719,8 @@ int sched_fork(unsigned long clone_flags p->prio = p->normal_prio = p->static_prio; set_load_weight(p, false); + p->latency_prio = NICE_TO_PRIO(0); + /* * We don't need the reset flag anymore after the fork. It has * fulfilled its duty: @@ -7477,7 +7479,7 @@ static struct task_struct *find_process_ #define SETPARAM_POLICY -1 static void __setscheduler_params(struct task_struct *p, - const struct sched_attr *attr) + const struct sched_attr *attr) { int policy = attr->sched_policy; @@ -7501,6 +7503,13 @@ static void __setscheduler_params(struct set_load_weight(p, true); } +static void __setscheduler_latency(struct task_struct *p, + const struct sched_attr *attr) +{ + if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) + p->latency_prio = NICE_TO_PRIO(attr->sched_latency_nice); +} + /* * Check the target process has a UID that matches the current process's: */ @@ -7641,6 +7650,13 @@ static int __sched_setscheduler(struct t return retval; } + if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) { + if (attr->sched_latency_nice > MAX_NICE) + return -EINVAL; + if (attr->sched_latency_nice < MIN_NICE) + return -EINVAL; + } + if (pi) cpuset_read_lock(); @@ -7675,6 +7691,9 @@ static int __sched_setscheduler(struct t goto change; if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) goto change; + if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE && + attr->sched_latency_nice != PRIO_TO_NICE(p->latency_prio)) + goto change; p->sched_reset_on_fork = reset_on_fork; retval = 0; @@ -7763,6 +7782,7 @@ static int __sched_setscheduler(struct t __setscheduler_params(p, attr); __setscheduler_prio(p, newprio); } + __setscheduler_latency(p, attr); __setscheduler_uclamp(p, attr); if (queued) { @@ -7973,6 +7993,9 @@ static int sched_copy_attr(struct sched_ size < SCHED_ATTR_SIZE_VER1) return -EINVAL; + if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) && + size < SCHED_ATTR_SIZE_VER2) + return -EINVAL; /* * XXX: Do we want to be lenient like existing syscalls; or do we want * to be strict and return an error on out-of-bounds values? 
@@ -8210,6 +8233,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi get_params(p, &kattr); kattr.sched_flags &= SCHED_FLAG_ALL; + kattr.sched_latency_nice = PRIO_TO_NICE(p->latency_prio); + #ifdef CONFIG_UCLAMP_TASK /* * This could race with another potential updater, but this is fine --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -1085,6 +1085,7 @@ void proc_sched_show_task(struct task_st #endif P(policy); P(prio); + P(latency_prio); if (task_has_dl_policy(p)) { P(dl.runtime); P(dl.deadline); --- a/tools/include/uapi/linux/sched.h +++ b/tools/include/uapi/linux/sched.h @@ -132,6 +132,7 @@ struct clone_args { #define SCHED_FLAG_KEEP_PARAMS 0x10 #define SCHED_FLAG_UTIL_CLAMP_MIN 0x20 #define SCHED_FLAG_UTIL_CLAMP_MAX 0x40 +#define SCHED_FLAG_LATENCY_NICE 0x80 #define SCHED_FLAG_KEEP_ALL (SCHED_FLAG_KEEP_POLICY | \ SCHED_FLAG_KEEP_PARAMS) @@ -143,6 +144,7 @@ struct clone_args { SCHED_FLAG_RECLAIM | \ SCHED_FLAG_DL_OVERRUN | \ SCHED_FLAG_KEEP_ALL | \ - SCHED_FLAG_UTIL_CLAMP) + SCHED_FLAG_UTIL_CLAMP | \ + SCHED_FLAG_LATENCY_NICE) #endif /* _UAPI_LINUX_SCHED_H */ From patchwork Wed May 31 11:58:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101454 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2871880vqr; Wed, 31 May 2023 06:16:08 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6qv434qFjrA25RC9s+u5nQWDeb1alj4f4vHK1+rIivAAR1gidghSard8Vp4fWu0dFKmPdg X-Received: by 2002:a05:6358:7241:b0:125:908e:22ce with SMTP id i1-20020a056358724100b00125908e22cemr2778052rwa.9.1685538967815; Wed, 31 May 2023 06:16:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685538967; cv=none; d=google.com; s=arc-20160816; b=qjnfc9r0Hhwevf6YyUr1gPfvfDmT0Jd9vMneiqNTBYyQkF+/BQ66lKfwC+qT/JU6Fq qbRbii1694m732Q+pGM4UCg1/7u+vmDHFfHw6Bjakknh7MjM4XfRD/sJhdhUBy0kGkUq V2pu9s1fJFKVgKct51aaSRlNeaOQfpa9/83aVA6i/lwU3H1KH8pAPMEBMgrfvjkLTLnj uNfwuwLY9h9S6lMZRXpDp028nqV0s4b73vyqDf2XLcupWDuJO2g6WzM+thr11YY2odJZ 9MnKqBUgS6aiOp+ClbnZOSAo9y0P6e1sP309RWQ5sJfjp1bKSbnYLa8KaP9m5yVfGcvY d7LQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=PBSWmj9vFVjjLqcL4+nxKoAvy2PS0YCtwS6oEGVZY9I=; b=o7xIy/lyjD/m8e7LitytyG3TrNRXV5nQXCmTuItnYIwlOwOIUeFmFWLmI2MEcK4hSH QkoeZyTLQks71ONBI7SEKVaZCvwNK1Lt3BcAv8GgfsQT/RuRP3cMxyPhFjJgjgSU2QAT chiS3MAbCKv3dn+Fu7xN/1cIjQvvmsjv2o6uFHF5OHW0LFZKijBngLtYE4TAuRS7frVe +VsaZdbHG20uLpeXuy3DuKRIxk/arsNJYCnf0YHGQkLNTOb98wMUbdrz9hoSIvgYyFGL ofhFgnBfyiPKUzIezVkYY+Y+YzbA2i8L7XKSoB2FeB2Bls9g1qGMNEE883HW7lSSA2Y1 /3CA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=HK+Caj9r; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
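For completeness, a hedged userspace sketch of the interface added by the patch above (there is no glibc wrapper for sched_setattr(), so the raw syscall is used; the struct layout, the 60-byte SCHED_ATTR_SIZE_VER2 and the 0x80 flag follow the uapi changes in this patch, and the local names are illustrative only):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

struct sched_attr_ln {			/* local mirror of struct sched_attr */
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
	int32_t  sched_latency_nice;	/* new field, SCHED_ATTR_SIZE_VER2 */
};

#define SCHED_FLAG_LATENCY_NICE_LN	0x80

static int set_latency_nice(int nice)	/* nice in [-20, 19] */
{
	struct sched_attr_ln attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = 60;			/* SCHED_ATTR_SIZE_VER2 */
	attr.sched_policy = 0;		/* SCHED_NORMAL */
	attr.sched_flags = SCHED_FLAG_LATENCY_NICE_LN;
	attr.sched_latency_nice = nice;

	/* note: this minimal call also reapplies policy and nice 0; SCHED_FLAG_KEEP_ALL could preserve them */
	return syscall(SYS_sched_setattr, 0, &attr, 0);
}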
[2620:137:e000::1:20]) by mx.google.com with ESMTP id z19-20020a17090abd9300b00256bef4bae7si931746pjr.105.2023.05.31.06.15.55; Wed, 31 May 2023 06:16:07 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=HK+Caj9r; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236047AbjEaMtj (ORCPT + 99 others); Wed, 31 May 2023 08:49:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235992AbjEaMtA (ORCPT ); Wed, 31 May 2023 08:49:00 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5396BE4D for ; Wed, 31 May 2023 05:48:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=PBSWmj9vFVjjLqcL4+nxKoAvy2PS0YCtwS6oEGVZY9I=; b=HK+Caj9r6h1z6KIgMujfd8PAKB ZyhYVnYFBeMYPB7WX57TqP32+u0yeT7U7Nhea18FLdt0lWvyIVCWR10Ieg0LEd6hAA+uWLqLRw8Yx zozyJIqLEgDwspR4NESvHqogFaIjwHmgSDuvnmgxYI84l6stgVybvmcgQfr9L5fNtFiRobXeI0Pax hZ08j6NEAkXdCYkYP6RpYrs7oqJi31Kf1knppewryVJd9c2hRTS7k685Fpke3uQJUcNAMq9naeJnz ZCQoFempJQHna20ZC4wyKBSRyL2SrypVYUc2VlkgmGIRFYDU8gasck+Z7Pj9WnGMuT2yl86Giyrff YeBji3fA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q4LEq-007GRd-0U; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 956C9300F2D; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 1F0CD22BA6460; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.477939524@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:52 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [RFC][PATCH 13/15] sched/fair: Implement latency-nice References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, 
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767415708926781041?= X-GMAIL-MSGID: =?utf-8?q?1767415708926781041?= Implement latency-nice as a modulation of the EEVDF r_i parameter, specifically apply the inverse sched_prio_to_weight[] relation on base_slice. Given a base slice of 3 [ms], this gives a range of: latency-nice 19: 3*1024 / 15 ~= 204.8 [ms] latency-nice -20: 3*1024 / 88761 ~= 0.034 [ms] (which might not make sense) Signed-off-by: Peter Zijlstra (Intel) Tested-by: K Prateek Nayak --- kernel/sched/core.c | 14 ++++++++++---- kernel/sched/fair.c | 22 +++++++++++++++------- kernel/sched/sched.h | 2 ++ 3 files changed, 27 insertions(+), 11 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1305,6 +1305,12 @@ static void set_load_weight(struct task_ } } +static inline void set_latency_prio(struct task_struct *p, int prio) +{ + p->latency_prio = prio; + set_latency_fair(&p->se, prio - MAX_RT_PRIO); +} + #ifdef CONFIG_UCLAMP_TASK /* * Serializes updates of utilization clamp values @@ -4464,9 +4470,10 @@ static void __sched_fork(unsigned long c p->se.nr_migrations = 0; p->se.vruntime = 0; p->se.vlag = 0; - p->se.slice = sysctl_sched_base_slice; INIT_LIST_HEAD(&p->se.group_node); + set_latency_prio(p, p->latency_prio); + #ifdef CONFIG_FAIR_GROUP_SCHED p->se.cfs_rq = NULL; #endif @@ -4718,8 +4725,7 @@ int sched_fork(unsigned long clone_flags p->prio = p->normal_prio = p->static_prio; set_load_weight(p, false); - - p->latency_prio = NICE_TO_PRIO(0); + set_latency_prio(p, NICE_TO_PRIO(0)); /* * We don't need the reset flag anymore after the fork. It has @@ -7507,7 +7513,7 @@ static void __setscheduler_latency(struc const struct sched_attr *attr) { if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) - p->latency_prio = NICE_TO_PRIO(attr->sched_latency_nice); + set_latency_prio(p, NICE_TO_PRIO(attr->sched_latency_nice)); } /* --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -952,6 +952,21 @@ int sched_update_scaling(void) } #endif +void set_latency_fair(struct sched_entity *se, int prio) +{ + u32 weight = sched_prio_to_weight[prio]; + u64 base = sysctl_sched_base_slice; + + /* + * For EEVDF the virtual time slope is determined by w_i (iow. + * nice) while the request time r_i is determined by + * latency-nice. + * + * Smaller request gets better latency. + */ + se->slice = div_u64(base << SCHED_FIXEDPOINT_SHIFT, weight); +} + static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se); /* @@ -964,13 +979,6 @@ static void update_deadline(struct cfs_r return; /* - * For EEVDF the virtual time slope is determined by w_i (iow. - * nice) while the request time r_i is determined by - * sysctl_sched_base_slice. 
- */ - se->slice = sysctl_sched_base_slice; - - /* * EEVDF: vd_i = ve_i + r_i / w_i */ se->deadline = se->vruntime + calc_delta_fair(se->slice, se); --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2495,6 +2495,8 @@ extern unsigned int sysctl_numa_balancin extern unsigned int sysctl_numa_balancing_hot_threshold; #endif +extern void set_latency_fair(struct sched_entity *se, int prio); + #ifdef CONFIG_SCHED_HRTICK /* From patchwork Wed May 31 11:58:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101449 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2868335vqr; Wed, 31 May 2023 06:11:20 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ64CNNej/lnztQi2mQnRTQRxvpS26mW6IBl2E7XuJBVAJvJrLqMw+XYRGW5I/Vx4TDOUqGe X-Received: by 2002:a05:6a00:b84:b0:64d:4a94:1a60 with SMTP id g4-20020a056a000b8400b0064d4a941a60mr7462630pfj.18.1685538680145; Wed, 31 May 2023 06:11:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685538680; cv=none; d=google.com; s=arc-20160816; b=oD1T5fn63fnJ45oJf/5zUsAslfKxygUfW9I+DydAzwWwK8D3Knl5CMntlWa/x0owgO +NLem40H78bBAFjVado0b3piEL0biGHZI7BZ4zeggwzxB/1FuyTtGKX470L6U65IbJ0v pZIhU/2iJimk5k/rv6sr+kPgFm7+FzeTQ9MPg3mkLEZv1OZLpZGTB6lPhOkawHNqo0c4 NTrSBzjweMNGdbRKwbqedfUvlJ78BYApV/e3Gi+vMRoboanPCh7yTGA0hDMiys4TnxOp KzJvIwR7G597Sak2+lKS9tgndmG7Xzh55mZ5xgNC8hzjORJY4aaV194xjzSMCcPECXYa VPSA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=cgla+rvT0nVnQI4wEDomn+b4ABDX9Oku+7rZVELb/vE=; b=SpgDdCObq1ZSSjGCTiFN6aNYRC5AChQVO5ZWFqx1VcLzSF1o9a0B7FCGS/QLMbKzl6 1aTASqEe5jj9rZplzkfrf4nYOi9iFuuy23dwO5OAHbj4AcfUuQGs4+O1ppAH5zWQQyIm D7pq4yUiLNCmW6POR2AadcPiO5UKvtlFpHr4UsBBhWVdeV+j4Gb0Cy4GzlEw/NuK67OY h3mXYHfpDlU+2yDN0f9hLRqYsB4g6bQjwt5DJq2TORz1x4y2cC8w6AGasmJWBJyK81cA +Zqzqs+wBXpmP7bVJPAAGpKk37vGBel4MQf8XgxT5VG2fLs2KwwHjTIYZSEyID5cBvNE zo9g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=T0EdiQhH; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
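A worked example of the mapping set_latency_fair() installs above (sketch only; it restates slice = base_slice * 1024 / weight, with the weight taken from sched_prio_to_weight[] for the latency-nice value, matching the numbers quoted in the changelog):

static unsigned long long latency_nice_slice(unsigned long long base_ns,
					     unsigned int weight)
{
	return base_ns * 1024 / weight;		/* 1024 == 1 << SCHED_FIXEDPOINT_SHIFT */
}

/*
 * With a 3 ms base slice:
 *   latency-nice  19 (weight 15)    -> latency_nice_slice(3000000, 15)    ~= 204800000 ns ~= 204.8 ms
 *   latency-nice -20 (weight 88761) -> latency_nice_slice(3000000, 88761) ~=     34609 ns ~= 0.034 ms
 */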
[2620:137:e000::1:20]) by mx.google.com with ESMTP id z128-20020a626586000000b0064354230c2asi1762175pfb.367.2023.05.31.06.11.06; Wed, 31 May 2023 06:11:20 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@infradead.org header.s=casper.20170209 header.b=T0EdiQhH; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236071AbjEaMt1 (ORCPT + 99 others); Wed, 31 May 2023 08:49:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45524 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235989AbjEaMs7 (ORCPT ); Wed, 31 May 2023 08:48:59 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A156E45 for ; Wed, 31 May 2023 05:48:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=cgla+rvT0nVnQI4wEDomn+b4ABDX9Oku+7rZVELb/vE=; b=T0EdiQhHcgLKDjJpGdLZIb7wfg d0msWmAZJDo20YEDR09tl6xcly2DUMDSLNV06g2klNG8mzGPIf5AVkCCsBpuJIS/bdzQ2gD1ie14L R041esemRi5hK/uYgGYivHDQ93XzW4RQ5POReVj9JfhGYdUCFLHdGpkbuXRQEZ7MlxmjfxiZvGZ8G iTa7pAxSpYtkLWRgHkOCh7ldlCfhfOpzj8QC+mkIyPXRbxPgCfCwUI4djfQ3//Yr803OnRRWd28LS GfItLfMz2CNb5ndU/Ntwz7vEcfLb0vLzAiyAUylxaBPCxpk5KMAlnauYc/6sClx7xIOyqK6llHuV5 OszLAf6g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q4LEq-007GRg-41; Wed, 31 May 2023 12:47:40 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9A2B33012A7; Wed, 31 May 2023 14:47:37 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 2486F22BA6461; Wed, 31 May 2023 14:47:34 +0200 (CEST) Message-ID: <20230531124604.546980086@infradead.org> User-Agent: quilt/0.66 Date: Wed, 31 May 2023 13:58:53 +0200 From: Peter Zijlstra To: mingo@kernel.org, vincent.guittot@linaro.org Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io, chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com, pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com, joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com, yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org, efault@gmx.de, tglx@linutronix.de Subject: [RFC][PATCH 14/15] sched/fair: Add sched group latency support References: <20230531115839.089944915@infradead.org> MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, 
SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1767415406803046482?= X-GMAIL-MSGID: =?utf-8?q?1767415406803046482?= From: Vincent Guittot
A task can set its latency priority with sched_setattr(), which is then used to set the latency offset of its sched_entity, but sched group entities still have the default latency offset value. Add a latency.nice field to the cpu cgroup controller to set the latency priority of the group, similarly to sched_setattr(). The latency priority is then used to set the offset of the sched_entities of the group. Signed-off-by: Vincent Guittot Signed-off-by: Peter Zijlstra (Intel) Tested-by: K Prateek Nayak Link: https://lkml.kernel.org/r/20230224093454.956298-7-vincent.guittot@linaro.org --- Documentation/admin-guide/cgroup-v2.rst | 10 ++++++++++ kernel/sched/core.c | 30 ++++++++++++++++++++++++++++++ kernel/sched/fair.c | 27 +++++++++++++++++++++++++++ kernel/sched/sched.h | 4 ++++ 4 files changed, 71 insertions(+)
--- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -1121,6 +1121,16 @@ All time durations are in microseconds. values similar to the sched_setattr(2). This maximum utilization value is used to clamp the task specific maximum utilization clamp. + cpu.latency.nice + A read-write single value file which exists on non-root + cgroups. The default is "0". + + The nice value is in the range [-20, 19]. + + This interface file allows reading and setting latency using the + same values used by sched_setattr(2). The latency_nice of a group is + used to limit the impact of the latency_nice of a task outside the + group.
Memory --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -11177,6 +11177,25 @@ static int cpu_idle_write_s64(struct cgr { return sched_group_set_idle(css_tg(css), idle); } + +static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css, + struct cftype *cft) +{ + return PRIO_TO_NICE(css_tg(css)->latency_prio); +} + +static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css, + struct cftype *cft, s64 nice) +{ + int prio; + + if (nice < MIN_NICE || nice > MAX_NICE) + return -ERANGE; + + prio = NICE_TO_PRIO(nice); + + return sched_group_set_latency(css_tg(css), prio); +} #endif static struct cftype cpu_legacy_files[] = { @@ -11191,6 +11210,11 @@ static struct cftype cpu_legacy_files[] .read_s64 = cpu_idle_read_s64, .write_s64 = cpu_idle_write_s64, }, + { + .name = "latency.nice", + .read_s64 = cpu_latency_nice_read_s64, + .write_s64 = cpu_latency_nice_write_s64, + }, #endif #ifdef CONFIG_CFS_BANDWIDTH { @@ -11408,6 +11432,12 @@ static struct cftype cpu_files[] = { .read_s64 = cpu_idle_read_s64, .write_s64 = cpu_idle_write_s64, }, + { + .name = "latency.nice", + .flags = CFTYPE_NOT_ON_ROOT, + .read_s64 = cpu_latency_nice_read_s64, + .write_s64 = cpu_latency_nice_write_s64, + }, #endif #ifdef CONFIG_CFS_BANDWIDTH { --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -12293,6 +12293,7 @@ int alloc_fair_sched_group(struct task_g goto err; tg->shares = NICE_0_LOAD; + tg->latency_prio = DEFAULT_PRIO; init_cfs_bandwidth(tg_cfs_bandwidth(tg)); @@ -12391,6 +12392,9 @@ void init_tg_cfs_entry(struct task_group } se->my_q = cfs_rq; + + set_latency_fair(se, tg->latency_prio - MAX_RT_PRIO); + /* guarantee group entities always have weight */ update_load_set(&se->load, NICE_0_LOAD); se->parent = parent; @@ -12519,6 +12523,29 @@ int sched_group_set_idle(struct task_gro mutex_unlock(&shares_mutex); return 0; +} + +int sched_group_set_latency(struct task_group *tg, int prio) +{ + int i; + + if (tg == &root_task_group) + return -EINVAL; + + mutex_lock(&shares_mutex); + + if (tg->latency_prio == prio) { + mutex_unlock(&shares_mutex); + return 0; + } + + tg->latency_prio = prio; + + for_each_possible_cpu(i) + set_latency_fair(tg->se[i], prio - MAX_RT_PRIO); + + mutex_unlock(&shares_mutex); + return 0; } #else /* CONFIG_FAIR_GROUP_SCHED */ --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -378,6 +378,8 @@ struct task_group { /* A positive value indicates that this is a SCHED_IDLE group. */ int idle; + /* latency priority of the group. 
*/ + int latency_prio; #ifdef CONFIG_SMP /* @@ -488,6 +490,8 @@ extern int sched_group_set_shares(struct extern int sched_group_set_idle(struct task_group *tg, long idle); +extern int sched_group_set_latency(struct task_group *tg, int prio); + #ifdef CONFIG_SMP extern void set_task_rq_fair(struct sched_entity *se, struct cfs_rq *prev, struct cfs_rq *next); From patchwork Wed May 31 11:58:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 101422 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp2854987vqr; Wed, 31 May 2023 05:54:50 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Nol8eClau4QtNy9SAnjYwlklTf8BYI17bsnwcQ2MUdP4xKBPpbotYYOm/3R7bURo8vIsP X-Received: by 2002:a17:902:c20d:b0:1aa:ff24:f8f0 with SMTP id 13-20020a170902c20d00b001aaff24f8f0mr4212897pll.4.1685537689772; Wed, 31 May 2023 05:54:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1685537689; cv=none; d=google.com; s=arc-20160816; b=UDjxY+VK6NgJpq4oa0SuKulcWy0rWoBZqVx8rWRz/QPSS4EsYuBZztjetLMEQq2VUE b/gooRybPIS6F3b3F4bbWDm8kGqRU9LzeQb50nRoPAdOnlL2vcjOig6KqNdNrZjIp4LE prLuvJ/GqNz663xN+xjG+OTU/FN4uzwqhmfrl7R0iUiLZEPdi6z0sKDcZdjvEC3lndMM LTQcROK8cqcRiPVm23oODPlLZUqSuVMCak5GEWIm/b5/b34LmXImFGSdlI57g/QczsJX E7HlWmrv196C8QhbUgNwhfUXKdRpuvkErjn03OlDnYHhJuH1lSh+npGqXXtfl2VlHr9M L7Fw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:subject:cc:to:from:date :user-agent:message-id:dkim-signature; bh=R1+1PylE05YHMapFN57/a7eEiVmU95jtvPhJv8WUe5M=; b=exBP+DLtHHcd3bXOl+9vZ6tdBNKsUM0LY0BGPNvRdlMX3BiGWFr6D8RgY+uxzeYbAI ExMttsvxc7rnYwURRrZIQvXMmdXwaR5tQuFwrQyNCMh3tVSfBC5QTzBxcQ+z8FygXQTO E1uYoH7J/aI2b/AysZgTyjwnvZkZ91cDT6CXvGcbxu8R5Lue1bkc4RZM82BKjle1ageP OYtnYsRH6kiW1sx2W9YLcqawrbxO8mAQgl039YIEUp8qcUfqcGdPrXcAWis3vDs4T3Np G/TGR4cKeS2GKdWScMXEZtmTww7Pd/OxdcNx8mCbf03Zm2mCSV2SirZuyRQjs7nvzcQM s+Fg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@infradead.org header.s=desiato.20200630 header.b=Ay5zZXGO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
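A hedged usage sketch for the cgroup side added above (the cgroup v2 mount point and group path are assumptions; the cpu.latency.nice file name and the [-20, 19] range come from the documentation hunk in this patch):

#include <stdio.h>

static int set_group_latency_nice(const char *cgroup_path, int nice)
{
	char path[256];
	FILE *f;

	/* e.g. cgroup_path = "/sys/fs/cgroup/myapp" (assumed mount point) */
	snprintf(path, sizeof(path), "%s/cpu.latency.nice", cgroup_path);

	f = fopen(path, "w");
	if (!f)
		return -1;

	fprintf(f, "%d\n", nice);	/* valid range [-20, 19] */
	return fclose(f);
}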
Message-ID: <20230531124604.615053451@infradead.org>
Date: Wed, 31 May 2023 13:58:54 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
    chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
    pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
    joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
    yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
    efault@gmx.de, tglx@linutronix.de
Subject: [RFC][PATCH 15/15] sched/eevdf: Use sched_attr::sched_runtime to set
 request/slice
References: <20230531115839.089944915@infradead.org>

As an alternative to the latency-nice interface, allow applications to
directly set the request/slice using sched_attr::sched_runtime.

The implementation clamps the value to:

  0.1[ms] <= slice <= 100[ms]

which is 1/10 the tick length at HZ=1000 and 10 times the tick length at
HZ=100.

Applications should strive to use their periodic runtime at a high
confidence interval (95%+) as the target slice. Using a smaller slice
will introduce undue preemptions, while using a larger value will
increase latency.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c |   24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7494,10 +7494,18 @@ static void __setscheduler_params(struct

 	p->policy = policy;

-	if (dl_policy(policy))
+	if (dl_policy(policy)) {
 		__setparam_dl(p, attr);
-	else if (fair_policy(policy))
+	} else if (fair_policy(policy)) {
 		p->static_prio = NICE_TO_PRIO(attr->sched_nice);
+		if (attr->sched_runtime) {
+			p->se.slice = clamp_t(u64, attr->sched_runtime,
+					      NSEC_PER_MSEC/10,   /* HZ=1000 * 10 */
+					      NSEC_PER_MSEC*100); /* HZ=100  / 10 */
+		} else {
+			p->se.slice = sysctl_sched_base_slice;
+		}
+	}

 	/*
 	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
@@ -7689,7 +7697,9 @@ static int __sched_setscheduler(struct t
 	 * but store a possible modification of reset_on_fork.
 	 */
 	if (unlikely(policy == p->policy)) {
-		if (fair_policy(policy) && attr->sched_nice != task_nice(p))
+		if (fair_policy(policy) &&
+		    (attr->sched_nice != task_nice(p) ||
+		     (attr->sched_runtime && attr->sched_runtime != p->se.slice)))
 			goto change;
 		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
 			goto change;
@@ -8017,12 +8027,14 @@ static int sched_copy_attr(struct sched_

 static void get_params(struct task_struct *p, struct sched_attr *attr)
 {
-	if (task_has_dl_policy(p))
+	if (task_has_dl_policy(p)) {
 		__getparam_dl(p, attr);
-	else if (task_has_rt_policy(p))
+	} else if (task_has_rt_policy(p)) {
 		attr->sched_priority = p->rt_priority;
-	else
+	} else {
 		attr->sched_nice = task_nice(p);
+		attr->sched_runtime = p->se.slice;
+	}
 }

 /**
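
For illustration only (not part of the patch): a minimal userspace sketch of
the sched_attr::sched_runtime usage described above. struct sched_attr is
declared locally following the UAPI layout and the call goes through
syscall(2), since glibc provides no sched_setattr() wrapper; the 2ms value is
an arbitrary example inside the clamped range.

/*
 * Hedged sketch: request a ~2ms slice for the calling thread via
 * sched_attr::sched_runtime.
 */
#define _GNU_SOURCE
#include <sched.h>		/* SCHED_OTHER */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;		/* reused by CFS/EEVDF as the slice */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.sched_policy  = SCHED_OTHER;
	attr.sched_runtime = 2ULL * 1000 * 1000;	/* 2ms; clamped to [0.1ms, 100ms] */

	if (syscall(SYS_sched_setattr, 0 /* calling thread */, &attr, 0 /* flags */))
		perror("sched_setattr");

	return 0;
}

With the get_params() change above, a subsequent sched_getattr() call reports
the effective (clamped) slice back in sched_attr::sched_runtime.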