From patchwork Wed May 31 11:58:46 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101456
Message-ID: <20230531124604.068911180@infradead.org>
Date: Wed, 31 May 2023 13:58:46 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
    chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
    pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
    joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
    yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
    efault@gmx.de, tglx@linutronix.de
Subject: [PATCH 07/15] sched/smp: Use lag to simplify cross-runqueue placement
References: <20230531115839.089944915@infradead.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Using lag is both more correct and simpler when moving between
runqueues. Notably, min_vruntime() was invented as a cheap approximation
of avg_vruntime() for this very purpose (SMP migration). Since we now
have the real thing, use it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c |  145 ++++++----------------------------------------------
 1 file changed, 19 insertions(+), 126 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5083,7 +5083,7 @@ place_entity(struct cfs_rq *cfs_rq, stru
 	 *
 	 * EEVDF: placement strategy #1 / #2
 	 */
-	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running > 1) {
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running) {
 		struct sched_entity *curr = cfs_rq->curr;
 		unsigned long load;
 
@@ -5171,61 +5171,21 @@ static void check_enqueue_throttle(struc
 
 static inline bool cfs_bandwidth_used(void);
 
-/*
- * MIGRATION
- *
- *	dequeue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way the vruntime transition between RQs is done when both
- * min_vruntime are up-to-date.
- *
- * WAKEUP (remote)
- *
- *	->migrate_task_rq_fair() (p->state == TASK_WAKING)
- *	  vruntime -= min_vruntime
- *
- *	enqueue
- *	  update_curr()
- *	    update_min_vruntime()
- *	  vruntime += min_vruntime
- *
- * this way we don't have the most up-to-date min_vruntime on the originating
- * CPU and an up-to-date min_vruntime on the destination CPU.
- */
-
 static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
-	bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATED);
 	bool curr = cfs_rq->curr == se;
 
 	/*
 	 * If we're the current task, we must renormalise before calling
 	 * update_curr().
 	 */
-	if (renorm && curr)
-		se->vruntime += cfs_rq->min_vruntime;
+	if (curr)
+		place_entity(cfs_rq, se, 0);
 
 	update_curr(cfs_rq);
 
 	/*
-	 * Otherwise, renormalise after, such that we're placed at the current
-	 * moment in time, instead of some random moment in the past. Being
-	 * placed in the past could significantly boost this task to the
-	 * fairness detriment of existing tasks.
-	 */
-	if (renorm && !curr)
-		se->vruntime += cfs_rq->min_vruntime;
-
-	/*
 	 * When enqueuing a sched_entity, we must:
 	 *   - Update loads to have both entity and cfs_rq synced with now.
 	 *   - For group_entity, update its runnable_weight to reflect the new
@@ -5236,11 +5196,22 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	 */
 	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
 	se_update_runnable(se);
+	/*
+	 * XXX update_load_avg() above will have attached us to the pelt sum;
+	 * but update_cfs_group() here will re-adjust the weight and have to
+	 * undo/redo all that. Seems wasteful.
+	 */
 	update_cfs_group(se);
-	account_entity_enqueue(cfs_rq, se);
 
-	if (flags & ENQUEUE_WAKEUP)
+	/*
+	 * XXX now that the entity has been re-weighted, and it's lag adjusted,
+	 * we can place the entity.
+	 */
+	if (!curr)
 		place_entity(cfs_rq, se, 0);
+
+	account_entity_enqueue(cfs_rq, se);
+
 	/* Entity has migrated, no longer consider this task hot */
 	if (flags & ENQUEUE_MIGRATED)
 		se->exec_start = 0;
 
@@ -5335,23 +5306,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 
 	clear_buddies(cfs_rq, se);
 
-	if (flags & DEQUEUE_SLEEP)
-		update_entity_lag(cfs_rq, se);
-
+	update_entity_lag(cfs_rq, se);
 	if (se != cfs_rq->curr)
 		__dequeue_entity(cfs_rq, se);
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
-	/*
-	 * Normalize after update_curr(); which will also have moved
-	 * min_vruntime if @se is the one holding it back. But before doing
-	 * update_min_vruntime() again, which will discount @se's position and
-	 * can move min_vruntime forward still more.
-	 */
-	if (!(flags & DEQUEUE_SLEEP))
-		se->vruntime -= cfs_rq->min_vruntime;
-
 	/* return excess runtime on last dequeue */
 	return_cfs_rq_runtime(cfs_rq);
 
@@ -8102,18 +8062,6 @@ static void migrate_task_rq_fair(struct
 {
 	struct sched_entity *se = &p->se;
 
-	/*
-	 * As blocked tasks retain absolute vruntime the migration needs to
-	 * deal with this by subtracting the old and adding the new
-	 * min_vruntime -- the latter is done by enqueue_entity() when placing
-	 * the task on the new runqueue.
-	 */
-	if (READ_ONCE(p->__state) == TASK_WAKING) {
-		struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-		se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
-	}
-
 	if (!task_on_rq_migrating(p)) {
 		remove_entity_load_avg(se);
 
@@ -12482,8 +12430,8 @@ static void task_tick_fair(struct rq *rq
  */
 static void task_fork_fair(struct task_struct *p)
 {
-	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se, *curr;
+	struct cfs_rq *cfs_rq;
 	struct rq *rq = this_rq();
 	struct rq_flags rf;
 
@@ -12492,22 +12440,9 @@ static void task_fork_fair(struct task_s
 
 	cfs_rq = task_cfs_rq(current);
 	curr = cfs_rq->curr;
-	if (curr) {
+	if (curr)
 		update_curr(cfs_rq);
-		se->vruntime = curr->vruntime;
-	}
 	place_entity(cfs_rq, se, 1);
-
-	if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
-		/*
-		 * Upon rescheduling, sched_class::put_prev_task() will place
-		 * 'current' within the tree based on its new key value.
-		 */
-		swap(curr->vruntime, se->vruntime);
-		resched_curr(rq);
-	}
-
-	se->vruntime -= cfs_rq->min_vruntime;
 	rq_unlock(rq, &rf);
 }
 
@@ -12536,34 +12471,6 @@ prio_changed_fair(struct rq *rq, struct
 	check_preempt_curr(rq, p, 0);
 }
 
-static inline bool vruntime_normalized(struct task_struct *p)
-{
-	struct sched_entity *se = &p->se;
-
-	/*
-	 * In both the TASK_ON_RQ_QUEUED and TASK_ON_RQ_MIGRATING cases,
-	 * the dequeue_entity(.flags=0) will already have normalized the
-	 * vruntime.
-	 */
-	if (p->on_rq)
-		return true;
-
-	/*
-	 * When !on_rq, vruntime of the task has usually NOT been normalized.
-	 * But there are some cases where it has already been normalized:
-	 *
-	 * - A forked child which is waiting for being woken up by
-	 *   wake_up_new_task().
-	 * - A task which has been woken up by try_to_wake_up() and
-	 *   waiting for actually being woken up by sched_ttwu_pending().
-	 */
-	if (!se->sum_exec_runtime ||
-	    (READ_ONCE(p->__state) == TASK_WAKING && p->sched_remote_wakeup))
-		return true;
-
-	return false;
-}
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
 /*
  * Propagate the changes of the sched_entity across the tg tree to make it
@@ -12634,16 +12541,6 @@ static void attach_entity_cfs_rq(struct
 static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
-	if (!vruntime_normalized(p)) {
-		/*
-		 * Fix up our vruntime so that the current sleep doesn't
-		 * cause 'unlimited' sleep bonus.
-		 */
-		place_entity(cfs_rq, se, 0);
-		se->vruntime -= cfs_rq->min_vruntime;
-	}
 
 	detach_entity_cfs_rq(se);
 }
@@ -12651,12 +12548,8 @@ static void detach_task_cfs_rq(struct ta
 static void attach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
 	attach_entity_cfs_rq(se);
-
-	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
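
For readers new to the lag-based scheme the changelog relies on, here is a
minimal stand-alone sketch (user-space toy C, not kernel code; toy_rq, toy_se,
toy_dequeue and toy_enqueue are invented names, with avg_vruntime/vlag fields
standing in for the real cfs_rq/sched_entity state): dequeue remembers how far
the entity sits from its queue's average vruntime, and enqueue re-places it at
the same relative distance from the destination queue's average, so no
min_vruntime bookkeeping is needed across the move.

/*
 * Toy model of lag-preserving migration (illustrative only).
 * Build: cc -o lag-toy lag-toy.c
 */
#include <stdio.h>

struct toy_rq {
	long long avg_vruntime;		/* stand-in for avg_vruntime(cfs_rq) */
};

struct toy_se {
	long long vruntime;
	long long vlag;			/* lag remembered across the move */
};

/* dequeue: record position relative to the source queue's average */
static void toy_dequeue(struct toy_rq *rq, struct toy_se *se)
{
	se->vlag = rq->avg_vruntime - se->vruntime;
}

/* enqueue: re-establish the same relative position on the destination */
static void toy_enqueue(struct toy_rq *rq, struct toy_se *se)
{
	se->vruntime = rq->avg_vruntime - se->vlag;
}

int main(void)
{
	struct toy_rq src = { .avg_vruntime = 1000 };
	struct toy_rq dst = { .avg_vruntime = 5000 };
	struct toy_se se  = { .vruntime = 1100 };	/* 100 ahead of src average */

	toy_dequeue(&src, &se);
	toy_enqueue(&dst, &se);

	/* prints 5100: still 100 ahead of the (new) average */
	printf("vruntime on dst = %lld\n", se.vruntime);
	return 0;
}

The real update_entity_lag()/place_entity() pair is of course more involved
(roughly: the stored lag is clamped, and load-scaled on placement so that
inserting the entity does not itself shift the queue average), but the
relative-placement idea is the same.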
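
For contrast, the scheme this patch removes kept a queued task's vruntime
absolute and converted it to and from a min_vruntime-relative value on every
cross-runqueue move. A toy sketch of that bookkeeping, again user-space and
illustrative only, with made-up old_rq/old_se names:

/*
 * Toy model of the old min_vruntime renormalisation (illustrative only).
 */
#include <stdio.h>

struct old_rq {
	long long min_vruntime;		/* per-queue baseline */
};

struct old_se {
	long long vruntime;
};

/* migration-style dequeue: make vruntime relative by stripping the baseline */
static void old_dequeue(struct old_rq *rq, struct old_se *se)
{
	se->vruntime -= rq->min_vruntime;
}

/* enqueue: make vruntime absolute again against the destination baseline */
static void old_enqueue(struct old_rq *rq, struct old_se *se)
{
	se->vruntime += rq->min_vruntime;
}

int main(void)
{
	struct old_rq src = { .min_vruntime = 1000 };
	struct old_rq dst = { .min_vruntime = 5000 };
	struct old_se se  = { .vruntime = 1100 };

	old_dequeue(&src, &se);		/* 100, relative to src */
	old_enqueue(&dst, &se);		/* 5100, absolute on dst */
	printf("vruntime on dst = %lld\n", se.vruntime);
	return 0;
}

The baseline here is the queue minimum rather than the true average, which is
the "cheap approximation" the changelog refers to. And because sleeping tasks
skipped the subtraction and kept an absolute vruntime, paths such as
attach/detach_task_cfs_rq() had to ask vruntime_normalized() which
representation they were looking at; with lag stored relative to avg_vruntime
that distinction disappears, which is why the helper and its callers can
simply be deleted.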