From patchwork Thu Jun 8 15:58:13 2023
From: Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 1/6] sched: Unify runtime accounting across classes
Date: Thu, 8 Jun 2023 17:58:13 +0200
Message-Id: <51ad657375206dac0f2609224babafa1c1486d4b.1686239016.git.bristot@kernel.org>

From: Peter Zijlstra

All classes use sched_entity::exec_start to track runtime and have copies
of the exact same code around to compute runtime. Collapse all that.
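The collapse also flips the shared delta to a signed type (exec_max becomes
s64 below, and the per-class delta_exec locals follow). A stand-alone sketch,
not kernel code and purely illustrative, of why the sign matters when the
task clock is observed slightly in the past:

/*
 * With u64, a now < exec_start observation wraps into a huge bogus
 * runtime; with s64 it goes negative and the "delta_exec <= 0" check
 * in the unified update_curr_se() filters it out.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t exec_start = 1000, now = 990;	/* clock read 10ns in the past */

	uint64_t u_delta = now - exec_start;		/* wraps to ~1.8e19 */
	int64_t s_delta = (int64_t)(now - exec_start);	/* -10 */

	printf("u64 delta: %llu (would be accounted)\n",
	       (unsigned long long)u_delta);
	if (s_delta <= 0)
		printf("s64 delta: %lld (skipped)\n", (long long)s_delta);
	return 0;
}

Every class that switches to the common helper inherits this filter for free.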
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Bristot de Oliveira
Reviewed-by: Phil Auld
Reviewed-by: Valentin Schneider
---
 include/linux/sched.h    |  2 +-
 kernel/sched/deadline.c  | 15 +++--------
 kernel/sched/fair.c      | 57 ++++++++++++++++++++++++++++++----------
 kernel/sched/rt.c        | 15 +++--------
 kernel/sched/sched.h     | 12 ++-------
 kernel/sched/stop_task.c | 13 +--------
 6 files changed, 53 insertions(+), 61 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1292d38d66cc..26b1925a702a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -521,7 +521,7 @@ struct sched_statistics {
 	u64				block_max;
 	s64				sum_block_runtime;

-	u64				exec_max;
+	s64				exec_max;
 	u64				slice_max;

 	u64				nr_migrations_cold;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index f827067ad03b..030e7c11607f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1301,9 +1301,8 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec, scaled_delta_exec;
+	s64 delta_exec, scaled_delta_exec;
 	int cpu = cpu_of(rq);
-	u64 now;

 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -1316,21 +1315,13 @@ static void update_curr_dl(struct rq *rq)
 	 * natural solution, but the full ramifications of this
 	 * approach need further study.
 	 */
-	now = rq_clock_task(rq);
-	delta_exec = now - curr->se.exec_start;
-	if (unlikely((s64)delta_exec <= 0)) {
+	delta_exec = update_curr_common(rq);
+	if (unlikely(delta_exec <= 0)) {
 		if (unlikely(dl_se->dl_yielded))
 			goto throttle;
 		return;
 	}

-	schedstat_set(curr->stats.exec_max,
-		      max(curr->stats.exec_max, delta_exec));
-
-	trace_sched_stat_runtime(curr, delta_exec, 0);
-
-	update_current_exec_runtime(curr, now, delta_exec);
-
 	if (dl_entity_is_special(dl_se))
 		return;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6189d1a45635..fda67f05190d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -891,23 +891,17 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
 }
 #endif /* CONFIG_SMP */

-/*
- * Update the current task's runtime statistics.
- */
-static void update_curr(struct cfs_rq *cfs_rq)
+static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
 {
-	struct sched_entity *curr = cfs_rq->curr;
-	u64 now = rq_clock_task(rq_of(cfs_rq));
-	u64 delta_exec;
-
-	if (unlikely(!curr))
-		return;
+	u64 now = rq_clock_task(rq);
+	s64 delta_exec;

 	delta_exec = now - curr->exec_start;
-	if (unlikely((s64)delta_exec <= 0))
-		return;
+	if (unlikely(delta_exec <= 0))
+		return delta_exec;

 	curr->exec_start = now;
+	curr->sum_exec_runtime += delta_exec;

 	if (schedstat_enabled()) {
 		struct sched_statistics *stats;
@@ -917,8 +911,43 @@ static void update_curr(struct cfs_rq *cfs_rq)
 			max(delta_exec, stats->exec_max));
 	}

-	curr->sum_exec_runtime += delta_exec;
-	schedstat_add(cfs_rq->exec_clock, delta_exec);
+	return delta_exec;
+}
+
+/*
+ * Used by other classes to account runtime.
+ */
+s64 update_curr_common(struct rq *rq)
+{
+	struct task_struct *curr = rq->curr;
+	s64 delta_exec;
+
+	delta_exec = update_curr_se(rq, &curr->se);
+	if (unlikely(delta_exec <= 0))
+		return delta_exec;
+
+	trace_sched_stat_runtime(curr, delta_exec, 0);
+
+	account_group_exec_runtime(curr, delta_exec);
+	cgroup_account_cputime(curr, delta_exec);
+
+	return delta_exec;
+}
+
+/*
+ * Update the current task's runtime statistics.
+ */
+static void update_curr(struct cfs_rq *cfs_rq)
+{
+	struct sched_entity *curr = cfs_rq->curr;
+	s64 delta_exec;
+
+	if (unlikely(!curr))
+		return;
+
+	delta_exec = update_curr_se(rq_of(cfs_rq), curr);
+	if (unlikely(delta_exec <= 0))
+		return;

 	curr->vruntime += calc_delta_fair(delta_exec, curr);
 	update_min_vruntime(cfs_rq);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 00e0e5074115..efec4f3fef83 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1046,24 +1046,15 @@ static void update_curr_rt(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_rt_entity *rt_se = &curr->rt;
-	u64 delta_exec;
-	u64 now;
+	s64 delta_exec;

 	if (curr->sched_class != &rt_sched_class)
 		return;

-	now = rq_clock_task(rq);
-	delta_exec = now - curr->se.exec_start;
-	if (unlikely((s64)delta_exec <= 0))
+	delta_exec = update_curr_common(rq);
+	if (unlikely(delta_exec <= 0))
 		return;

-	schedstat_set(curr->stats.exec_max,
-		      max(curr->stats.exec_max, delta_exec));
-
-	trace_sched_stat_runtime(curr, delta_exec, 0);
-
-	update_current_exec_runtime(curr, now, delta_exec);
-
 	if (!rt_bandwidth_enabled())
 		return;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 556496c77dc2..da0cec2fc63a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2176,6 +2176,8 @@ struct affinity_context {
 	unsigned int flags;
 };

+extern s64 update_curr_common(struct rq *rq);
+
 struct sched_class {

 #ifdef CONFIG_UCLAMP_TASK
@@ -3207,16 +3209,6 @@ extern int sched_dynamic_mode(const char *str);
 extern void sched_dynamic_update(int mode);
 #endif

-static inline void update_current_exec_runtime(struct task_struct *curr,
-						u64 now, u64 delta_exec)
-{
-	curr->se.sum_exec_runtime += delta_exec;
-	account_group_exec_runtime(curr, delta_exec);
-
-	curr->se.exec_start = now;
-	cgroup_account_cputime(curr, delta_exec);
-}
-
 #ifdef CONFIG_SCHED_MM_CID

 #define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 85590599b4d6..7595494ceb6d 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -70,18 +70,7 @@ static void yield_task_stop(struct rq *rq)

 static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
 {
-	struct task_struct *curr = rq->curr;
-	u64 now, delta_exec;
-
-	now = rq_clock_task(rq);
-	delta_exec = now - curr->se.exec_start;
-	if (unlikely((s64)delta_exec < 0))
-		delta_exec = 0;
-
-	schedstat_set(curr->stats.exec_max,
-		      max(curr->stats.exec_max, delta_exec));
-
-	update_current_exec_runtime(curr, now, delta_exec);
+	update_curr_common(rq);
 }

 /*
From patchwork Thu Jun 8 15:58:14 2023
From: Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 2/6] sched/deadline: Collect sched_dl_entity initialization
Date: Thu, 8 Jun 2023 17:58:14 +0200

From: Peter Zijlstra

Create a single function that initializes a sched_dl_entity.
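One consequence worth making explicit (an observation, not part of the patch
text): once initialization is collected behind init_dl_entity(), a
sched_dl_entity no longer has to live inside a task_struct to be set up,
which is what the later DL-server patch relies on. A hedged sketch, with a
hypothetical container struct:

/* Sketch only: 'my_dl_owner' is illustrative, not in the patch. */
struct my_dl_owner {
	struct sched_dl_entity	dl_se;
};

static void my_dl_owner_setup(struct my_dl_owner *owner)
{
	/* one call now does RB_CLEAR_NODE + both timers + param reset */
	init_dl_entity(&owner->dl_se);
}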
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Bristot de Oliveira
Reviewed-by: Phil Auld
Reviewed-by: Valentin Schneider
---
 kernel/sched/core.c     |  5 +----
 kernel/sched/deadline.c | 22 +++++++++++++++-------
 kernel/sched/sched.h    |  5 +----
 3 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ac38225e6d09..e34b02cbe41f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4511,10 +4511,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->stats, 0, sizeof(p->stats));
 #endif

-	RB_CLEAR_NODE(&p->dl.rb_node);
-	init_dl_task_timer(&p->dl);
-	init_dl_inactive_task_timer(&p->dl);
-	__dl_clear_params(p);
+	init_dl_entity(&p->dl);

 	INIT_LIST_HEAD(&p->rt.run_list);
 	p->rt.timeout		= 0;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 030e7c11607f..22e5e64812c9 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -333,6 +333,8 @@ static void dl_change_utilization(struct task_struct *p, u64 new_bw)
 	__add_rq_bw(new_bw, &rq->dl);
 }

+static void __dl_clear_params(struct sched_dl_entity *dl_se);
+
 /*
  * The utilization of a task cannot be immediately removed from
  * the rq active utilization (running_bw) when the task blocks.
@@ -432,7 +434,7 @@ static void task_non_contending(struct task_struct *p)
 			raw_spin_lock(&dl_b->lock);
 			__dl_sub(dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
 			raw_spin_unlock(&dl_b->lock);
-			__dl_clear_params(p);
+			__dl_clear_params(dl_se);
 		}

 		return;
@@ -1205,7 +1207,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }

-void init_dl_task_timer(struct sched_dl_entity *dl_se)
+static void init_dl_task_timer(struct sched_dl_entity *dl_se)
 {
 	struct hrtimer *timer = &dl_se->dl_timer;

@@ -1415,7 +1417,7 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 		raw_spin_lock(&dl_b->lock);
 		__dl_sub(dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
 		raw_spin_unlock(&dl_b->lock);
-		__dl_clear_params(p);
+		__dl_clear_params(dl_se);

 		goto unlock;
 	}
@@ -1431,7 +1433,7 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }

-void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
+static void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
 {
 	struct hrtimer *timer = &dl_se->inactive_timer;

@@ -2974,10 +2976,8 @@ bool __checkparam_dl(const struct sched_attr *attr)
 /*
  * This function clears the sched_dl_entity static params.
  */
-void __dl_clear_params(struct task_struct *p)
+static void __dl_clear_params(struct sched_dl_entity *dl_se)
 {
-	struct sched_dl_entity *dl_se = &p->dl;
-
 	dl_se->dl_runtime	= 0;
 	dl_se->dl_deadline	= 0;
 	dl_se->dl_period	= 0;
@@ -2995,6 +2995,14 @@
 #endif
 }

+void init_dl_entity(struct sched_dl_entity *dl_se)
+{
+	RB_CLEAR_NODE(&dl_se->rb_node);
+	init_dl_task_timer(dl_se);
+	init_dl_inactive_task_timer(dl_se);
+	__dl_clear_params(dl_se);
+}
+
 bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
 {
 	struct sched_dl_entity *dl_se = &p->dl;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index da0cec2fc63a..fa6512070fa7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -284,8 +284,6 @@ struct rt_bandwidth {
 	unsigned int		rt_period_active;
 };

-void __dl_clear_params(struct task_struct *p);
-
 static inline int dl_bandwidth_enabled(void)
 {
 	return sysctl_sched_rt_runtime >= 0;
@@ -2390,8 +2388,7 @@ extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);

-extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
-extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);
+extern void init_dl_entity(struct sched_dl_entity *dl_se);

 #define BW_SHIFT		20
 #define BW_UNIT			(1 << BW_SHIFT)

From patchwork Thu Jun 8 15:58:15 2023
From: Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 3/6] sched/deadline: Move bandwidth accounting into
 {en,de}queue_dl_entity
Date: Thu, 8 Jun 2023 17:58:15 +0200

From: Peter Zijlstra

In preparation for introducing a !task sched_dl_entity, move the bandwidth
accounting into {en,de}queue_dl_entity().
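The mechanics, sketched as a stand-alone C program (illustrative only; flag
names mirror the patch, everything else is simplified): the task-level
wrappers translate task state into entity-level ENQUEUE_/DEQUEUE_ flags, so
the entity code can decide on bandwidth transfers without ever touching a
task_struct. Matching bit values (the 0x80 pair added to sched.h below) keep
the enqueue/dequeue pairing obvious:

#include <stdio.h>

#define ENQUEUE_RESTORE		0x02
#define ENQUEUE_MIGRATING	0x80	/* mirrors the new flag */

static void enqueue_entity(int flags)
{
	if (flags & (ENQUEUE_RESTORE | ENQUEUE_MIGRATING))
		printf("add_rq_bw + add_running_bw\n");	/* bandwidth follows */
	else
		printf("plain enqueue\n");
}

int main(void)
{
	enqueue_entity(0);			/* wakeup: no bw transfer */
	enqueue_entity(ENQUEUE_MIGRATING);	/* migration: bw moves with the entity */
	return 0;
}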
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Bristot de Oliveira
Reviewed-by: Valentin Schneider
---
 kernel/sched/deadline.c | 130 ++++++++++++++++++++++------------
 kernel/sched/sched.h    |   6 ++
 2 files changed, 78 insertions(+), 58 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 22e5e64812c9..869734eecb2c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -389,12 +389,12 @@ static void __dl_clear_params(struct sched_dl_entity *dl_se);
  * up, and checks if the task is still in the "ACTIVE non contending"
  * state or not (in the second case, it updates running_bw).
  */
-static void task_non_contending(struct task_struct *p)
+static void task_non_contending(struct sched_dl_entity *dl_se)
 {
-	struct sched_dl_entity *dl_se = &p->dl;
 	struct hrtimer *timer = &dl_se->inactive_timer;
 	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
 	struct rq *rq = rq_of_dl_rq(dl_rq);
+	struct task_struct *p = dl_task_of(dl_se);
 	s64 zerolag_time;

 	/*
@@ -426,13 +426,14 @@ static void task_non_contending(struct task_struct *p)
 		if (dl_task(p))
 			sub_running_bw(dl_se, dl_rq);
+
 		if (!dl_task(p) || READ_ONCE(p->__state) == TASK_DEAD) {
 			struct dl_bw *dl_b = dl_bw_of(task_cpu(p));

 			if (READ_ONCE(p->__state) == TASK_DEAD)
-				sub_rq_bw(&p->dl, &rq->dl);
+				sub_rq_bw(dl_se, &rq->dl);
 			raw_spin_lock(&dl_b->lock);
-			__dl_sub(dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
+			__dl_sub(dl_b, dl_se->dl_bw, dl_bw_cpus(task_cpu(p)));
 			raw_spin_unlock(&dl_b->lock);
 			__dl_clear_params(dl_se);
 		}
@@ -1629,6 +1630,41 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)

 	update_stats_enqueue_dl(dl_rq_of_se(dl_se), dl_se, flags);

+	/*
+	 * Check if a constrained deadline task was activated
+	 * after the deadline but before the next period.
+	 * If that is the case, the task will be throttled and
+	 * the replenishment timer will be set to the next period.
+	 */
+	if (!dl_se->dl_throttled && !dl_is_implicit(dl_se))
+		dl_check_constrained_dl(dl_se);
+
+	if (flags & (ENQUEUE_RESTORE|ENQUEUE_MIGRATING)) {
+		struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
+
+		add_rq_bw(dl_se, dl_rq);
+		add_running_bw(dl_se, dl_rq);
+	}
+
+	/*
+	 * If p is throttled, we do not enqueue it. In fact, if it exhausted
+	 * its budget it needs a replenishment and, since it now is on
+	 * its rq, the bandwidth timer callback (which clearly has not
+	 * run yet) will take care of this.
+	 * However, the active utilization does not depend on the fact
+	 * that the task is on the runqueue or not (but depends on the
+	 * task's state - in GRUB parlance, "inactive" vs "active contending").
+	 * In other words, even if a task is throttled its utilization must
+	 * be counted in the active utilization; hence, we need to call
+	 * add_running_bw().
+	 */
+	if (dl_se->dl_throttled && !(flags & ENQUEUE_REPLENISH)) {
+		if (flags & ENQUEUE_WAKEUP)
+			task_contending(dl_se, flags);
+
+		return;
+	}
+
 	/*
 	 * If this is a wakeup or a new instance, the scheduling
	 * parameters of the task might need updating. Otherwise,
@@ -1648,9 +1684,28 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)
 	__enqueue_dl_entity(dl_se);
 }

-static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
+static void dequeue_dl_entity(struct sched_dl_entity *dl_se, int flags)
 {
 	__dequeue_dl_entity(dl_se);
+
+	if (flags & (DEQUEUE_SAVE|DEQUEUE_MIGRATING)) {
+		struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
+
+		sub_running_bw(dl_se, dl_rq);
+		sub_rq_bw(dl_se, dl_rq);
+	}
+
+	/*
+	 * This check allows to start the inactive timer (or to immediately
+	 * decrease the active utilization, if needed) in two cases:
+	 * when the task blocks and when it is terminating
+	 * (p->state == TASK_DEAD). We can handle the two cases in the same
+	 * way, because from GRUB's point of view the same thing is happening
+	 * (the task moves from "active contending" to "active non contending"
+	 * or "inactive")
+	 */
+	if (flags & DEQUEUE_SLEEP)
+		task_non_contending(dl_se);
 }

 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
@@ -1695,76 +1750,35 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 		return;
 	}

-	/*
-	 * Check if a constrained deadline task was activated
-	 * after the deadline but before the next period.
-	 * If that is the case, the task will be throttled and
-	 * the replenishment timer will be set to the next period.
-	 */
-	if (!p->dl.dl_throttled && !dl_is_implicit(&p->dl))
-		dl_check_constrained_dl(&p->dl);
-
-	if (p->on_rq == TASK_ON_RQ_MIGRATING || flags & ENQUEUE_RESTORE) {
-		add_rq_bw(&p->dl, &rq->dl);
-		add_running_bw(&p->dl, &rq->dl);
-	}
-
-	/*
-	 * If p is throttled, we do not enqueue it. In fact, if it exhausted
-	 * its budget it needs a replenishment and, since it now is on
-	 * its rq, the bandwidth timer callback (which clearly has not
-	 * run yet) will take care of this.
-	 * However, the active utilization does not depend on the fact
-	 * that the task is on the runqueue or not (but depends on the
-	 * task's state - in GRUB parlance, "inactive" vs "active contending").
-	 * In other words, even if a task is throttled its utilization must
-	 * be counted in the active utilization; hence, we need to call
-	 * add_running_bw().
-	 */
-	if (p->dl.dl_throttled && !(flags & ENQUEUE_REPLENISH)) {
-		if (flags & ENQUEUE_WAKEUP)
-			task_contending(&p->dl, flags);
-
-		return;
-	}
-
 	check_schedstat_required();
 	update_stats_wait_start_dl(dl_rq_of_se(&p->dl), &p->dl);

+	if (p->on_rq == TASK_ON_RQ_MIGRATING)
+		flags |= ENQUEUE_MIGRATING;
+
 	enqueue_dl_entity(&p->dl, flags);

-	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
+	if (!task_current(rq, p) && !p->dl.dl_throttled && p->nr_cpus_allowed > 1)
 		enqueue_pushable_dl_task(rq, p);
 }

 static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 {
 	update_stats_dequeue_dl(&rq->dl, &p->dl, flags);
-	dequeue_dl_entity(&p->dl);
-	dequeue_pushable_dl_task(rq, p);
+	dequeue_dl_entity(&p->dl, flags);
+
+	if (!p->dl.dl_throttled)
+		dequeue_pushable_dl_task(rq, p);
 }

 static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 {
 	update_curr_dl(rq);
-	__dequeue_task_dl(rq, p, flags);

-	if (p->on_rq == TASK_ON_RQ_MIGRATING || flags & DEQUEUE_SAVE) {
-		sub_running_bw(&p->dl, &rq->dl);
-		sub_rq_bw(&p->dl, &rq->dl);
-	}
+	if (p->on_rq == TASK_ON_RQ_MIGRATING)
+		flags |= DEQUEUE_MIGRATING;

-	/*
-	 * This check allows to start the inactive timer (or to immediately
-	 * decrease the active utilization, if needed) in two cases:
-	 * when the task blocks and when it is terminating
-	 * (p->state == TASK_DEAD). We can handle the two cases in the same
-	 * way, because from GRUB's point of view the same thing is happening
-	 * (the task moves from "active contending" to "active non contending"
-	 * or "inactive")
-	 */
-	if (flags & DEQUEUE_SLEEP)
-		task_non_contending(p);
+	__dequeue_task_dl(rq, p, flags);
 }

 /*
@@ -2580,7 +2594,7 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 	 * will reset the task parameters.
 	 */
 	if (task_on_rq_queued(p) && p->dl.dl_runtime)
-		task_non_contending(p);
+		task_non_contending(&p->dl);

 	if (!task_on_rq_queued(p)) {
 		/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fa6512070fa7..aaf163695c2e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2142,6 +2142,10 @@ extern const u32 sched_prio_to_wmult[40];
 * MOVE - paired with SAVE/RESTORE, explicitly does not preserve the location
 *        in the runqueue.
 *
+ * NOCLOCK - skip the update_rq_clock() (avoids double updates)
+ *
+ * MIGRATION - p->on_rq == TASK_ON_RQ_MIGRATING (used for DEADLINE)
+ *
 * ENQUEUE_HEAD      - place at front of runqueue (tail if not specified)
 * ENQUEUE_REPLENISH - CBS (replenish runtime and postpone deadline)
 * ENQUEUE_MIGRATED  - the task was migrated during wakeup
@@ -2152,6 +2156,7 @@ extern const u32 sched_prio_to_wmult[40];
 #define DEQUEUE_SAVE		0x02 /* Matches ENQUEUE_RESTORE */
 #define DEQUEUE_MOVE		0x04 /* Matches ENQUEUE_MOVE */
 #define DEQUEUE_NOCLOCK		0x08 /* Matches ENQUEUE_NOCLOCK */
+#define DEQUEUE_MIGRATING	0x80 /* Matches ENQUEUE_MIGRATING */

 #define ENQUEUE_WAKEUP		0x01
 #define ENQUEUE_RESTORE		0x02
@@ -2165,6 +2170,7 @@ extern const u32 sched_prio_to_wmult[40];
 #else
 #define ENQUEUE_MIGRATED	0x00
 #endif
+#define ENQUEUE_MIGRATING	0x80

 #define RETRY_TASK		((void *)-1UL)

From patchwork Thu Jun 8 15:58:16 2023
From: Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 4/6] sched/deadline: Introduce deadline servers
Date: Thu, 8 Jun 2023 17:58:16 +0200

From: Peter Zijlstra

Low priority tasks (e.g., SCHED_OTHER) can suffer starvation if tasks with
higher priority (e.g., SCHED_FIFO) monopolize CPU(s).

RT Throttling was introduced a while ago as a (mostly debug) countermeasure
one can utilize to reserve some CPU time for low priority tasks (usually
background type of work, e.g. workqueues, timers, etc.). It however has its
own problems (see documentation) and the undesired effect of unconditionally
throttling FIFO tasks even when no lower priority activity needs to run
(there are mechanisms to fix this issue as well, but, again, with their own
problems).

Introduce deadline servers to service the needs of low priority tasks under
starvation conditions. Deadline servers are built by extending the
SCHED_DEADLINE implementation to allow 2-level scheduling (a sched_deadline
entity becomes a container for lower priority scheduling entities).
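To make the interface concrete, a sketch of how a class could wire a server
up with the hooks added below. The actual fair-server wiring lands later in
the series; 'my_server' (as an rq field) and the callback bodies here are
hypothetical, a sketch under stated assumptions rather than the patch's own
code:

/*
 * Hypothetical owner of a DL server for a runqueue's CFS tasks.
 * dl_server_init() links the entity to its rq and callbacks; once
 * dl_server_start() enqueues it, pick_task_dl() calls ->server_pick()
 * to choose the task that runs inside the server's CBS reservation.
 */
static bool my_server_has_tasks(struct sched_dl_entity *dl_se)
{
	return !!dl_se->rq->cfs.nr_running;	/* assumption: serve CFS */
}

static struct task_struct *my_server_pick(struct sched_dl_entity *dl_se)
{
	return pick_next_task_fair(dl_se->rq, NULL, NULL);	/* illustrative */
}

static void my_server_setup(struct rq *rq)
{
	dl_server_init(&rq->my_server, rq,	/* 'my_server' is hypothetical */
		       my_server_has_tasks, my_server_pick);
	dl_server_start(&rq->my_server);
}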
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Bristot de Oliveira
---
 include/linux/sched.h   |  22 ++-
 kernel/sched/core.c     |  17 ++
 kernel/sched/deadline.c | 350 +++++++++++++++++++++++++++-------------
 kernel/sched/fair.c     |   4 +
 kernel/sched/sched.h    |  29 ++++
 5 files changed, 309 insertions(+), 113 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 26b1925a702a..4c90d7693a75 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -64,12 +64,14 @@ struct robust_list_head;
 struct root_domain;
 struct rq;
 struct sched_attr;
+struct sched_dl_entity;
 struct sched_param;
 struct seq_file;
 struct sighand_struct;
 struct signal_struct;
 struct task_delay_info;
 struct task_group;
+struct task_struct;
 struct user_event_mm;

 /*
@@ -600,6 +602,9 @@ struct sched_rt_entity {
 #endif
 } __randomize_layout;

+typedef bool (*dl_server_has_tasks_f)(struct sched_dl_entity *);
+typedef struct task_struct *(*dl_server_pick_f)(struct sched_dl_entity *);
+
 struct sched_dl_entity {
 	struct rb_node			rb_node;

@@ -647,6 +652,7 @@ struct sched_dl_entity {
 	unsigned int			dl_yielded        : 1;
 	unsigned int			dl_non_contending : 1;
 	unsigned int			dl_overrun	  : 1;
+	unsigned int			dl_server         : 1;

 	/*
 	 * Bandwidth enforcement timer. Each -deadline task has its
@@ -661,7 +667,20 @@ struct sched_dl_entity {
 	 * timer is needed to decrease the active utilization at the correct
 	 * time.
 	 */
-	struct hrtimer inactive_timer;
+	struct hrtimer			inactive_timer;
+
+	/*
+	 * Bits for DL-server functionality. Also see the comment near
+	 * dl_server_update().
+	 *
+	 * @rq the runqueue this server is for
+	 *
+	 * @server_has_tasks() returns true if @server_pick return a
+	 * runnable task.
+	 */
+	struct rq			*rq;
+	dl_server_has_tasks_f		server_has_tasks;
+	dl_server_pick_f		server_pick;

 #ifdef CONFIG_RT_MUTEXES
 	/*
@@ -790,6 +809,7 @@ struct task_struct {
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
 	struct sched_dl_entity		dl;
+	struct sched_dl_entity		*server;
 	const struct sched_class	*sched_class;

 #ifdef CONFIG_SCHED_CORE
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e34b02cbe41f..5b88b822ec89 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3803,6 +3803,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 		rq->idle_stamp = 0;
 	}
 #endif
+
+	p->server = NULL;
 }

 /*
@@ -6013,12 +6015,27 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 			p = pick_next_task_idle(rq);
 		}

+		/*
+		 * This is the fast path; it cannot be a DL server pick;
+		 * therefore even if @p == @prev, ->server must be NULL.
+		 */
+		if (p->server)
+			p->server = NULL;
+
 		return p;
 	}

 restart:
 	put_prev_task_balance(rq, prev, rf);

+	/*
+	 * We've updated @prev and no longer need the server link, clear it.
+	 * Must be done before ->pick_next_task() because that can (re)set
+	 * ->server.
+	 */
+	if (prev->server)
+		prev->server = NULL;
+
 	for_each_class(class) {
 		p = class->pick_next_task(rq);
 		if (p)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 869734eecb2c..c67056ff5749 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -52,8 +52,14 @@ static int __init sched_dl_sysctl_init(void)
 late_initcall(sched_dl_sysctl_init);
 #endif

+static bool dl_server(struct sched_dl_entity *dl_se)
+{
+	return dl_se->dl_server;
+}
+
 static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
 {
+	BUG_ON(dl_server(dl_se));
 	return container_of(dl_se, struct task_struct, dl);
 }

@@ -62,14 +68,22 @@ static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq)
 	return container_of(dl_rq, struct rq, dl);
 }

-static inline struct dl_rq *dl_rq_of_se(struct sched_dl_entity *dl_se)
+static inline struct rq *rq_of_dl_se(struct sched_dl_entity *dl_se)
 {
-	struct task_struct *p = dl_task_of(dl_se);
-	struct rq *rq = task_rq(p);
+	struct rq *rq = dl_se->rq;
+
+	if (!dl_server(dl_se))
+		rq = task_rq(dl_task_of(dl_se));

-	return &rq->dl;
+	return rq;
 }

+static inline struct dl_rq *dl_rq_of_se(struct sched_dl_entity *dl_se)
+{
+	return &rq_of_dl_se(dl_se)->dl;
+}
+
+
 static inline int on_dl_rq(struct sched_dl_entity *dl_se)
 {
 	return !RB_EMPTY_NODE(&dl_se->rb_node);
@@ -392,9 +406,8 @@ static void __dl_clear_params(struct sched_dl_entity *dl_se);
 static void task_non_contending(struct sched_dl_entity *dl_se)
 {
 	struct hrtimer *timer = &dl_se->inactive_timer;
-	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
-	struct rq *rq = rq_of_dl_rq(dl_rq);
-	struct task_struct *p = dl_task_of(dl_se);
+	struct rq *rq = rq_of_dl_se(dl_se);
+	struct dl_rq *dl_rq = &rq->dl;
 	s64 zerolag_time;

 	/*
@@ -424,25 +437,33 @@ static void task_non_contending(struct sched_dl_entity *dl_se)
 	 * utilization now, instead of starting a timer
 	 */
 	if ((zerolag_time < 0) || hrtimer_active(&dl_se->inactive_timer)) {
-		if (dl_task(p))
+		if (dl_server(dl_se)) {
 			sub_running_bw(dl_se, dl_rq);
+		} else {
+			struct task_struct *p = dl_task_of(dl_se);
+
+			if (dl_task(p))
+				sub_running_bw(dl_se, dl_rq);

-		if (!dl_task(p) || READ_ONCE(p->__state) == TASK_DEAD) {
-			struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
+			if (!dl_task(p) || READ_ONCE(p->__state) == TASK_DEAD) {
+				struct dl_bw *dl_b = dl_bw_of(task_cpu(p));

-			if (READ_ONCE(p->__state) == TASK_DEAD)
-				sub_rq_bw(dl_se, &rq->dl);
-			raw_spin_lock(&dl_b->lock);
-			__dl_sub(dl_b, dl_se->dl_bw, dl_bw_cpus(task_cpu(p)));
-			raw_spin_unlock(&dl_b->lock);
-			__dl_clear_params(dl_se);
+				if (READ_ONCE(p->__state) == TASK_DEAD)
+					sub_rq_bw(dl_se, &rq->dl);
+				raw_spin_lock(&dl_b->lock);
+				__dl_sub(dl_b, dl_se->dl_bw, dl_bw_cpus(task_cpu(p)));
+				raw_spin_unlock(&dl_b->lock);
+				__dl_clear_params(dl_se);
+			}
 		}

 		return;
 	}

 	dl_se->dl_non_contending = 1;
-	get_task_struct(p);
+	if (!dl_server(dl_se))
+		get_task_struct(dl_task_of(dl_se));
+
 	hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD);
 }

@@ -469,8 +490,10 @@ static void task_contending(struct sched_dl_entity *dl_se, int flags)
 		 * will not touch the rq's active utilization,
 		 * so we are still safe.
		 */
-		if (hrtimer_try_to_cancel(&dl_se->inactive_timer) == 1)
-			put_task_struct(dl_task_of(dl_se));
+		if (hrtimer_try_to_cancel(&dl_se->inactive_timer) == 1) {
+			if (!dl_server(dl_se))
+				put_task_struct(dl_task_of(dl_se));
+		}
 	} else {
 		/*
 		 * Since "dl_non_contending" is not set, the
@@ -483,10 +506,8 @@ static void task_contending(struct sched_dl_entity *dl_se, int flags)
 	}
 }

-static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
+static inline int is_leftmost(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
-	struct sched_dl_entity *dl_se = &p->dl;
-
 	return rb_first_cached(&dl_rq->root) == &dl_se->rb_node;
 }

@@ -573,8 +594,6 @@ static void inc_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)

 	if (p->nr_cpus_allowed > 1)
 		dl_rq->dl_nr_migratory++;
-
-	update_dl_migration(dl_rq);
 }

 static void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
@@ -583,8 +602,6 @@ static void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)

 	if (p->nr_cpus_allowed > 1)
 		dl_rq->dl_nr_migratory--;
-
-	update_dl_migration(dl_rq);
 }

 #define __node_2_pdl(node) \
@@ -762,8 +779,10 @@ static inline void deadline_queue_pull_task(struct rq *rq)
 }
 #endif /* CONFIG_SMP */

+static void
+enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags);
 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
-static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags);
+static void dequeue_dl_entity(struct sched_dl_entity *dl_se, int flags);
 static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, int flags);

 static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se,
@@ -1011,8 +1030,7 @@ static inline bool dl_is_implicit(struct sched_dl_entity *dl_se)
  */
 static void update_dl_entity(struct sched_dl_entity *dl_se)
 {
-	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
-	struct rq *rq = rq_of_dl_rq(dl_rq);
+	struct rq *rq = rq_of_dl_se(dl_se);

 	if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
 	    dl_entity_overflow(dl_se, rq_clock(rq))) {
@@ -1043,11 +1061,11 @@ static inline u64 dl_next_period(struct sched_dl_entity *dl_se)
  * actually started or not (i.e., the replenishment instant is in
  * the future or in the past).
  */
-static int start_dl_timer(struct task_struct *p)
+static int start_dl_timer(struct sched_dl_entity *dl_se)
 {
-	struct sched_dl_entity *dl_se = &p->dl;
 	struct hrtimer *timer = &dl_se->dl_timer;
-	struct rq *rq = task_rq(p);
+	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
+	struct rq *rq = rq_of_dl_rq(dl_rq);
 	ktime_t now, act;
 	s64 delta;

@@ -1081,13 +1099,33 @@ static int start_dl_timer(struct task_struct *p)
 	 * and observe our state.
 	 */
 	if (!hrtimer_is_queued(timer)) {
-		get_task_struct(p);
+		if (!dl_server(dl_se))
+			get_task_struct(dl_task_of(dl_se));
 		hrtimer_start(timer, act, HRTIMER_MODE_ABS_HARD);
 	}

 	return 1;
 }

+static void __push_dl_task(struct rq *rq, struct rq_flags *rf)
+{
+#ifdef CONFIG_SMP
+	/*
+	 * Queueing this task back might have overloaded rq, check if we need
+	 * to kick someone away.
+	 */
+	if (has_pushable_dl_tasks(rq)) {
+		/*
+		 * Nothing relies on rq->lock after this, so its safe to drop
+		 * rq->lock.
+		 */
+		rq_unpin_lock(rq, rf);
+		push_dl_task(rq);
+		rq_repin_lock(rq, rf);
+	}
+#endif
+}
+
 /*
 * This is the bandwidth enforcement timer callback. If here, we know
 * a task is not on its dl_rq, since the fact that the timer was running
@@ -1106,10 +1144,34 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	struct sched_dl_entity *dl_se = container_of(timer,
 						     struct sched_dl_entity,
 						     dl_timer);
-	struct task_struct *p = dl_task_of(dl_se);
+	struct task_struct *p;
 	struct rq_flags rf;
 	struct rq *rq;

+	if (dl_server(dl_se)) {
+		struct rq *rq = rq_of_dl_se(dl_se);
+		struct rq_flags rf;
+
+		rq_lock(rq, &rf);
+		if (dl_se->dl_throttled) {
+			sched_clock_tick();
+			update_rq_clock(rq);
+
+			if (dl_se->server_has_tasks(dl_se)) {
+				enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH);
+				resched_curr(rq);
+				__push_dl_task(rq, &rf);
+			} else {
+				replenish_dl_entity(dl_se);
+			}
+
+		}
+		rq_unlock(rq, &rf);
+
+		return HRTIMER_NORESTART;
+	}
+
+	p = dl_task_of(dl_se);
 	rq = task_rq_lock(p, &rf);

 	/*
@@ -1180,21 +1242,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	else
 		resched_curr(rq);

-#ifdef CONFIG_SMP
-	/*
-	 * Queueing this task back might have overloaded rq, check if we need
-	 * to kick someone away.
-	 */
-	if (has_pushable_dl_tasks(rq)) {
-		/*
-		 * Nothing relies on rq->lock after this, so its safe to drop
-		 * rq->lock.
-		 */
-		rq_unpin_lock(rq, &rf);
-		push_dl_task(rq);
-		rq_repin_lock(rq, &rf);
-	}
-#endif
+	__push_dl_task(rq, &rf);

 unlock:
 	task_rq_unlock(rq, p, &rf);
@@ -1236,12 +1284,11 @@ static void init_dl_task_timer(struct sched_dl_entity *dl_se)
  */
 static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
 {
-	struct task_struct *p = dl_task_of(dl_se);
-	struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
+	struct rq *rq = rq_of_dl_se(dl_se);

 	if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
 	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
-		if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(p)))
+		if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se)))
 			return;
 		dl_se->dl_throttled = 1;
 		if (dl_se->runtime > 0)
@@ -1296,29 +1343,13 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
 	return (delta * u_act) >> BW_SHIFT;
 }

-/*
- * Update the current task's runtime statistics (provided it is still
- * a -deadline task and has not been removed from the dl_rq).
- */
-static void update_curr_dl(struct rq *rq)
+static inline void
+update_stats_dequeue_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se,
+			int flags);
+static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64 delta_exec)
 {
-	struct task_struct *curr = rq->curr;
-	struct sched_dl_entity *dl_se = &curr->dl;
-	s64 delta_exec, scaled_delta_exec;
-	int cpu = cpu_of(rq);
-
-	if (!dl_task(curr) || !on_dl_rq(dl_se))
-		return;
+	s64 scaled_delta_exec;

-	/*
-	 * Consumed budget is computed considering the time as
-	 * observed by schedulable tasks (excluding time spent
-	 * in hardirq context, etc.). Deadlines are instead
-	 * computed using hard walltime. This seems to be the more
-	 * natural solution, but the full ramifications of this
-	 * approach need further study.
-	 */
-	delta_exec = update_curr_common(rq);
 	if (unlikely(delta_exec <= 0)) {
 		if (unlikely(dl_se->dl_yielded))
 			goto throttle;
 		return;
 	}
@@ -1336,10 +1367,9 @@ static void update_curr_dl(struct rq *rq)
 	 * according to current frequency and CPU maximum capacity.
	 */
 	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
-		scaled_delta_exec = grub_reclaim(delta_exec,
-						 rq,
-						 &curr->dl);
+		scaled_delta_exec = grub_reclaim(delta_exec, rq, dl_se);
 	} else {
+		int cpu = cpu_of(rq);
 		unsigned long scale_freq = arch_scale_freq_capacity(cpu);
 		unsigned long scale_cpu = arch_scale_cpu_capacity(cpu);

@@ -1358,11 +1388,21 @@ static void update_curr_dl(struct rq *rq)
 		    (dl_se->flags & SCHED_FLAG_DL_OVERRUN))
 			dl_se->dl_overrun = 1;

-		__dequeue_task_dl(rq, curr, 0);
-		if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(curr)))
-			enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH);
+		dequeue_dl_entity(dl_se, 0);
+		if (!dl_server(dl_se)) {
+			/* XXX: After v2, from __dequeue_task_dl() */
+			update_stats_dequeue_dl(&rq->dl, dl_se, 0);
+			dequeue_pushable_dl_task(rq, dl_task_of(dl_se));
+		}
+
+		if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) {
+			if (dl_server(dl_se))
+				enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH);
+			else
+				enqueue_task_dl(rq, dl_task_of(dl_se), ENQUEUE_REPLENISH);
+		}

-		if (!is_leftmost(curr, &rq->dl))
+		if (!is_leftmost(dl_se, &rq->dl))
 			resched_curr(rq);
 	}

@@ -1392,20 +1432,82 @@ static void update_curr_dl(struct rq *rq)
 	}
 }

+void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec)
+{
+	update_curr_dl_se(dl_se->rq, dl_se, delta_exec);
+}
+
+void dl_server_start(struct sched_dl_entity *dl_se)
+{
+	if (!dl_server(dl_se)) {
+		dl_se->dl_server = 1;
+		setup_new_dl_entity(dl_se);
+	}
+	enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
+}
+
+void dl_server_stop(struct sched_dl_entity *dl_se)
+{
+	dequeue_dl_entity(dl_se, DEQUEUE_SLEEP);
+}
+
+void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+		    dl_server_has_tasks_f has_tasks,
+		    dl_server_pick_f pick)
+{
+	dl_se->rq = rq;
+	dl_se->server_has_tasks = has_tasks;
+	dl_se->server_pick = pick;
+}
+
+/*
+ * Update the current task's runtime statistics (provided it is still
+ * a -deadline task and has not been removed from the dl_rq).
+ */
+static void update_curr_dl(struct rq *rq)
+{
+	struct task_struct *curr = rq->curr;
+	struct sched_dl_entity *dl_se = &curr->dl;
+	s64 delta_exec;
+
+	if (!dl_task(curr) || !on_dl_rq(dl_se))
+		return;
+
+	/*
+	 * Consumed budget is computed considering the time as
+	 * observed by schedulable tasks (excluding time spent
+	 * in hardirq context, etc.). Deadlines are instead
+	 * computed using hard walltime. This seems to be the more
+	 * natural solution, but the full ramifications of this
+	 * approach need further study.
+	 */
+	delta_exec = update_curr_common(rq);
+	update_curr_dl_se(rq, dl_se, delta_exec);
+}
+
 static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 {
 	struct sched_dl_entity *dl_se = container_of(timer,
 						     struct sched_dl_entity,
 						     inactive_timer);
-	struct task_struct *p = dl_task_of(dl_se);
+	struct task_struct *p = NULL;
 	struct rq_flags rf;
 	struct rq *rq;

-	rq = task_rq_lock(p, &rf);
+	if (!dl_server(dl_se)) {
+		p = dl_task_of(dl_se);
+		rq = task_rq_lock(p, &rf);
+	} else {
+		rq = dl_se->rq;
+		rq_lock(rq, &rf);
+	}

 	sched_clock_tick();
 	update_rq_clock(rq);

+	if (dl_server(dl_se))
+		goto no_task;
+
 	if (!dl_task(p) || READ_ONCE(p->__state) == TASK_DEAD) {
 		struct dl_bw *dl_b = dl_bw_of(task_cpu(p));

@@ -1422,14 +1524,21 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 		goto unlock;
 	}

+
+no_task:
 	if (dl_se->dl_non_contending == 0)
 		goto unlock;

 	sub_running_bw(dl_se, &rq->dl);
 	dl_se->dl_non_contending = 0;
 unlock:
-	task_rq_unlock(rq, p, &rf);
-	put_task_struct(p);
+
+	if (!dl_server(dl_se)) {
+		task_rq_unlock(rq, p, &rf);
+		put_task_struct(p);
+	} else {
+		rq_unlock(rq, &rf);
+	}

 	return HRTIMER_NORESTART;
 }
@@ -1487,34 +1596,35 @@ static void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
 static inline void inc_dl_deadline(struct dl_rq *dl_rq, u64 deadline) {}
 static inline void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline) {}

+static inline void update_dl_migration(struct dl_rq *dl_rq) {}
+
 #endif /* CONFIG_SMP */

 static inline
 void inc_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
-	int prio = dl_task_of(dl_se)->prio;
 	u64 deadline = dl_se->deadline;

-	WARN_ON(!dl_prio(prio));
 	dl_rq->dl_nr_running++;
 	add_nr_running(rq_of_dl_rq(dl_rq), 1);

 	inc_dl_deadline(dl_rq, deadline);
-	inc_dl_migration(dl_se, dl_rq);
+	if (!dl_server(dl_se))
+		inc_dl_migration(dl_se, dl_rq);
+	update_dl_migration(dl_rq);
 }

 static inline
 void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
-	int prio = dl_task_of(dl_se)->prio;
-
-	WARN_ON(!dl_prio(prio));
 	WARN_ON(!dl_rq->dl_nr_running);
 	dl_rq->dl_nr_running--;
 	sub_nr_running(rq_of_dl_rq(dl_rq), 1);

 	dec_dl_deadline(dl_rq, dl_se->deadline);
-	dec_dl_migration(dl_se, dl_rq);
+	if (!dl_server(dl_se))
+		dec_dl_migration(dl_se, dl_rq);
+	update_dl_migration(dl_rq);
 }

 static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
@@ -1676,8 +1786,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)
 	} else if (flags & ENQUEUE_REPLENISH) {
 		replenish_dl_entity(dl_se);
 	} else if ((flags & ENQUEUE_RESTORE) &&
-		   dl_time_before(dl_se->deadline,
-				  rq_clock(rq_of_dl_rq(dl_rq_of_se(dl_se))))) {
+		   dl_time_before(dl_se->deadline, rq_clock(rq_of_dl_se(dl_se)))) {
 		setup_new_dl_entity(dl_se);
 	}

@@ -1762,14 +1871,6 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 		enqueue_pushable_dl_task(rq, p);
 }

-static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
-{
-	update_stats_dequeue_dl(&rq->dl, &p->dl, flags);
-	dequeue_dl_entity(&p->dl, flags);
-
-	if (!p->dl.dl_throttled)
-		dequeue_pushable_dl_task(rq, p);
-}

 static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 {
@@ -1778,7 +1879,9 @@ static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 	if (p->on_rq == TASK_ON_RQ_MIGRATING)
 		flags |= DEQUEUE_MIGRATING;

-	__dequeue_task_dl(rq, p, flags);
+	dequeue_dl_entity(&p->dl, flags);
+	if (!p->dl.dl_throttled)
+		dequeue_pushable_dl_task(rq, p);
 }

 /*
@@ -1968,12 +2071,12 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 }
 
 #ifdef CONFIG_SCHED_HRTICK
-static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
+static void start_hrtick_dl(struct rq *rq, struct sched_dl_entity *dl_se)
 {
-	hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, dl_se->runtime);
 }
 #else /* !CONFIG_SCHED_HRTICK */
-static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
+static void start_hrtick_dl(struct rq *rq, struct sched_dl_entity *dl_se)
 {
 }
 #endif
 
@@ -1993,9 +2096,6 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	if (!first)
 		return;
 
-	if (hrtick_enabled_dl(rq))
-		start_hrtick_dl(rq, p);
-
 	if (rq->curr->sched_class != &dl_sched_class)
 		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
@@ -2018,12 +2118,26 @@ static struct task_struct *pick_task_dl(struct rq *rq)
 	struct dl_rq *dl_rq = &rq->dl;
 	struct task_struct *p;
 
+again:
 	if (!sched_dl_runnable(rq))
 		return NULL;
 
 	dl_se = pick_next_dl_entity(dl_rq);
 	WARN_ON_ONCE(!dl_se);
-	p = dl_task_of(dl_se);
+
+	if (dl_server(dl_se)) {
+		p = dl_se->server_pick(dl_se);
+		if (!p) {
+			/* XXX should not happen, warn?! */
+			dl_se->dl_yielded = 1;
+			update_curr_dl_se(rq, dl_se, 0);
+			goto again;
+		}
+		p->server = dl_se;
+	} else {
+		p = dl_task_of(dl_se);
+	}
 
 	return p;
 }
@@ -2033,9 +2147,20 @@ static struct task_struct *pick_next_task_dl(struct rq *rq)
 	struct task_struct *p;
 
 	p = pick_task_dl(rq);
-	if (p)
+	if (!p)
+		return p;
+
+	/*
+	 * XXX: re-check !dl_server, changed from v2 because of
+	 * pick_next_task_dl change
+	 */
+	if (!dl_server(&p->dl))
 		set_next_task_dl(rq, p, true);
 
+	/* XXX not quite right */
+	if (hrtick_enabled(rq))
+		start_hrtick_dl(rq, &p->dl);
+
 	return p;
 }
 
@@ -2073,8 +2198,8 @@ static void task_tick_dl(struct rq *rq, struct task_struct *p, int queued)
 	 * be set and schedule() will start a new hrtick for the next task.
 	 */
 	if (hrtick_enabled_dl(rq) && queued && p->dl.runtime > 0 &&
-	    is_leftmost(p, &rq->dl))
-		start_hrtick_dl(rq, p);
+	    is_leftmost(&p->dl, &rq->dl))
+		start_hrtick_dl(rq, &p->dl);
 }
 
 static void task_fork_dl(struct task_struct *p)
@@ -3003,6 +3128,7 @@ static void __dl_clear_params(struct sched_dl_entity *dl_se)
 	dl_se->dl_yielded	= 0;
 	dl_se->dl_non_contending = 0;
 	dl_se->dl_overrun	= 0;
+	dl_se->dl_server	= 0;
 
 #ifdef CONFIG_RT_MUTEXES
 	dl_se->pi_se		= dl_se;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fda67f05190d..0c58d8e55b69 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -930,6 +930,8 @@ s64 update_curr_common(struct rq *rq)
 	account_group_exec_runtime(curr, delta_exec);
 	cgroup_account_cputime(curr, delta_exec);
+	if (curr->server)
+		dl_server_update(curr->server, delta_exec);
 
 	return delta_exec;
 }
@@ -958,6 +960,8 @@ static void update_curr(struct cfs_rq *cfs_rq)
 		trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
 		cgroup_account_cputime(curtask, delta_exec);
 		account_group_exec_runtime(curtask, delta_exec);
+		if (curtask->server)
+			dl_server_update(curtask->server, delta_exec);
 	}
 
 	account_cfs_rq_runtime(cfs_rq, delta_exec);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index aaf163695c2e..390c99e2f8a8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -324,6 +324,35 @@ extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
 extern int  dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
 extern int  dl_cpu_busy(int cpu, struct task_struct *p);
 
+/*
+ * SCHED_DEADLINE supports servers (nested scheduling) with the following
+ * interface:
+ *
+ *   dl_se::rq -- runqueue we belong to.
+ *
+ *   dl_se::server_has_tasks() -- used on bandwidth enforcement; we 'stop' the
+ *                                server when it runs out of tasks to run.
+ *
+ *   dl_se::server_pick() -- nested pick_next_task(); we yield the period if
+ *                           this returns NULL.
+ *
+ *   dl_server_update() -- called from update_curr_common(), propagates runtime
+ *                         to the server.
+ *
+ *   dl_server_start()
+ *   dl_server_stop() -- start/stop the server when it has (no) tasks.
+ *
+ *   dl_server_init()
+ *
+ * XXX
+ */
+extern void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec);
+extern void dl_server_start(struct sched_dl_entity *dl_se);
+extern void dl_server_stop(struct sched_dl_entity *dl_se);
+extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+		    dl_server_has_tasks_f has_tasks,
+		    dl_server_pick_f pick);
+
 #ifdef CONFIG_CGROUP_SCHED
 
 struct cfs_rq;
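For illustration, here is a minimal sketch of how a scheduling class could
wire itself into this interface. The names my_has_tasks(), my_pick() and
my_server_attach() are hypothetical placeholders; only the dl_server_*()
calls and the callback typedefs come from the interface above.

/* Hypothetical wiring sketch; the my_* names are illustrative only. */
static bool my_has_tasks(struct sched_dl_entity *dl_se)
{
	/* Tell bandwidth enforcement whether contained tasks remain. */
	return !!dl_se->rq->cfs.nr_running;
}

static struct task_struct *my_pick(struct sched_dl_entity *dl_se)
{
	/* Nested pick_next_task(); returning NULL yields the period. */
	return pick_next_task_fair(dl_se->rq, NULL, NULL);
}

static void my_server_attach(struct rq *rq, struct sched_dl_entity *dl_se)
{
	/* Register the runqueue and callbacks; nothing is enqueued yet. */
	dl_server_init(dl_se, rq, my_has_tasks, my_pick);

	/* Later, when the first contained task shows up: */
	dl_server_start(dl_se);
}

From there, dl_server_update() is driven automatically from
update_curr_common() for any picked task carrying p->server, and
dl_server_stop() undoes dl_server_start() once the last contained task
leaves.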
From patchwork Thu Jun 8 15:58:17 2023
X-Patchwork-Id: 105031
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni,
    Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai,
    Shuah Khan, Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 5/6] sched/fair: Add trivial fair server
Date: Thu, 8 Jun 2023 17:58:17 +0200
Message-Id: <8db5a49ea92ad8b875d331d6136721645a382fe8.1686239016.git.bristot@kernel.org>

From: Peter Zijlstra

Use deadline servers to service fair tasks.

This patch adds a fair_server deadline entity which acts as a container
for fair entities and can be used to fix starvation when higher-priority
(with respect to fair) tasks are monopolizing CPU(s).
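To see the intended effect, one can pin a busy-looping SCHED_FIFO task and
a SCHED_OTHER task to the same CPU: without a fair server the latter
starves, while with this patch it should keep making (slow) progress. The
program below is an illustrative sketch, not part of the patch; it needs
root privileges for SCHED_FIFO and elides all error handling.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };
	cpu_set_t set;

	/* Pin this process (and the child it forks) to CPU 0. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	if (fork() == 0) {
		/* Child stays SCHED_OTHER; it should still run occasionally. */
		for (unsigned long n = 1; ; n++) {
			if (n % 100000000UL == 0)
				printf("fair task alive after %lu spins\n", n);
		}
	}

	/* Parent becomes a FIFO hog that would otherwise starve the child. */
	sched_setscheduler(0, SCHED_FIFO, &sp);
	for (;;)
		;
}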
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Bristot de Oliveira
---
 kernel/sched/core.c  |  1 +
 kernel/sched/fair.c  | 29 +++++++++++++++++++++++++++++
 kernel/sched/sched.h |  4 ++++
 3 files changed, 34 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5b88b822ec89..7506dde9849d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10058,6 +10058,7 @@ void __init sched_init(void)
 #endif /* CONFIG_SMP */
 		hrtick_rq_init(rq);
 		atomic_set(&rq->nr_iowait, 0);
+		fair_server_init(rq);
 
 #ifdef CONFIG_SCHED_CORE
 		rq->core = rq;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c58d8e55b69..f493f05c1f84 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6336,6 +6336,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 */
 	util_est_enqueue(&rq->cfs, p);
 
+	if (!rq->cfs.h_nr_running)
+		dl_server_start(&rq->fair_server);
+
 	/*
 	 * If in_iowait is set, the code below may not trigger any cpufreq
 	 * utilization updates, so do it here explicitly with the IOWAIT flag
@@ -6480,6 +6483,9 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		rq->next_balance = jiffies;
 
 dequeue_throttle:
+	if (!rq->cfs.h_nr_running)
+		dl_server_stop(&rq->fair_server);
+
 	util_est_update(&rq->cfs, p, task_sleep);
 	hrtick_update(rq);
 }
@@ -8221,6 +8227,29 @@ static struct task_struct *__pick_next_task_fair(struct rq *rq)
 	return pick_next_task_fair(rq, NULL, NULL);
 }
 
+static bool fair_server_has_tasks(struct sched_dl_entity *dl_se)
+{
+	return !!dl_se->rq->cfs.nr_running;
+}
+
+static struct task_struct *fair_server_pick(struct sched_dl_entity *dl_se)
+{
+	return pick_next_task_fair(dl_se->rq, NULL, NULL);
+}
+
+void fair_server_init(struct rq *rq)
+{
+	struct sched_dl_entity *dl_se = &rq->fair_server;
+
+	init_dl_entity(dl_se);
+
+	dl_se->dl_runtime = TICK_NSEC;
+	dl_se->dl_deadline = 20 * TICK_NSEC;
+	dl_se->dl_period = 20 * TICK_NSEC;
+
+	dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick);
+}
+
 /*
  * Account for a descheduled task:
  */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 390c99e2f8a8..d4a7c0823c53 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -353,6 +353,8 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
 		    dl_server_has_tasks_f has_tasks,
 		    dl_server_pick_f pick);
 
+extern void fair_server_init(struct rq *);
+
 #ifdef CONFIG_CGROUP_SCHED
 
 struct cfs_rq;
@@ -1015,6 +1017,8 @@ struct rq {
 	struct rt_rq		rt;
 	struct dl_rq		dl;
 
+	struct sched_dl_entity	fair_server;
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this CPU: */
 	struct list_head	leaf_cfs_rq_list;
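The parameters chosen in fair_server_init() above give the fair server one
tick of runtime every twenty ticks, i.e. a 5% CPU reservation; with HZ=250,
for example, TICK_NSEC is 4 ms, so fair tasks are guaranteed 4 ms of
service every 80 ms. A quick check of that arithmetic (the HZ value is an
assumption for illustration):

#include <stdio.h>

int main(void)
{
	const double tick_nsec = 4e6;		/* TICK_NSEC with HZ=250 (assumed) */
	double runtime = tick_nsec;		/* dl_runtime  = TICK_NSEC      */
	double period  = 20.0 * tick_nsec;	/* dl_period   = 20 * TICK_NSEC */

	/* Prints "fair server bandwidth: 5.0%"; the ratio is HZ-independent. */
	printf("fair server bandwidth: %.1f%%\n", 100.0 * runtime / period);
	return 0;
}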
From patchwork Thu Jun 8 15:58:18 2023
X-Patchwork-Id: 105033
From: Daniel Bristot de Oliveira
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Valentin Schneider, linux-kernel@vger.kernel.org, Luca Abeni,
    Tommaso Cucinotta, Thomas Gleixner, Joel Fernandes, Vineeth Pillai,
    Shuah Khan, Daniel Bristot de Oliveira
Subject: [RFC PATCH V3 6/6] sched/fair: Implement starvation monitor
Date: Thu, 8 Jun 2023 17:58:18 +0200

From: Juri Lelli

Starting the deadline server for lower-priority classes right away when
the first task is enqueued might break guarantees, as tasks belonging to
intermediate priority classes could be uselessly preempted. For example,
a well-behaved (non-hog) FIFO task can be preempted by NORMAL tasks even
when there are still CPU cycles available for NORMAL tasks to run, since
they will be running inside the fair deadline server for some period of
time.

To prevent this issue, implement a starvation monitor mechanism that
starts the deadline server only if a (fair, in this case) task hasn't
been scheduled for some interval of time after it has been enqueued.
Use the pick/put functions to manage the starvation monitor status.

Signed-off-by: Juri Lelli
Signed-off-by: Daniel Bristot de Oliveira
---
 kernel/sched/fair.c  | 57 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h |  4 ++++
 2 files changed, 59 insertions(+), 2 deletions(-)
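Distilled from the diff that follows, the watchdog life cycle looks roughly
like this (a reading aid in comment form, derived from the code below, not
a separate implementation). With FAIR_SERVER_WATCHDOG_INTERVAL set to
HZ >> 1, the server only kicks in after fair tasks have been runnable but
unserved for about half a second:

/*
 * Watchdog states, as driven by the hooks in the patch below:
 *
 *   enqueue_task_fair(first fair task)
 *       -> fair_server_watchdog_start(): arm a per-rq timer, HZ >> 1 ahead
 *
 *   pick_next_task_fair() services a fair task in time
 *       -> fair_server_watchdog_stop(rq, false): disarm, server not needed
 *
 *   timer fires while fair tasks are still queued
 *       -> dl_server_start(&rq->fair_server): starvation detected
 *
 *   dequeue_task_fair(last fair task)
 *       -> fair_server_watchdog_stop(rq, true): disarm and stop the server
 *
 *   put_prev_task_fair() with fair tasks still queued
 *       -> fair_server_watchdog_start(): re-arm for the tasks left behind
 */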
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f493f05c1f84..75eadd85e2b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6315,6 +6315,53 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
+static void fair_server_watchdog(struct timer_list *list)
+{
+	struct rq *rq = container_of(list, struct rq, fair_server_wd);
+	struct rq_flags rf;
+
+	rq_lock_irqsave(rq, &rf);
+	rq->fair_server_wd_running = 0;
+
+	if (!rq->cfs.h_nr_running)
+		goto out;
+
+	update_rq_clock(rq);
+	dl_server_start(&rq->fair_server);
+	rq->fair_server_active = 1;
+	resched_curr(rq);
+
+out:
+	rq_unlock_irqrestore(rq, &rf);
+}
+
+static inline void fair_server_watchdog_start(struct rq *rq)
+{
+	if (rq->fair_server_wd_running || rq->fair_server_active)
+		return;
+
+	timer_setup(&rq->fair_server_wd, fair_server_watchdog, 0);
+	rq->fair_server_wd.expires = jiffies + FAIR_SERVER_WATCHDOG_INTERVAL;
+	add_timer_on(&rq->fair_server_wd, cpu_of(rq));
+	rq->fair_server_active = 0;
+	rq->fair_server_wd_running = 1;
+}
+
+static inline void fair_server_watchdog_stop(struct rq *rq, bool stop_server)
+{
+	if (!rq->fair_server_wd_running && !stop_server)
+		return;
+
+	del_timer(&rq->fair_server_wd);
+	rq->fair_server_wd_running = 0;
+
+	if (stop_server && rq->fair_server_active) {
+		dl_server_stop(&rq->fair_server);
+		rq->fair_server_active = 0;
+	}
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -6337,7 +6384,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	util_est_enqueue(&rq->cfs, p);
 
 	if (!rq->cfs.h_nr_running)
-		dl_server_start(&rq->fair_server);
+		fair_server_watchdog_start(rq);
 
 	/*
 	 * If in_iowait is set, the code below may not trigger any cpufreq
@@ -6484,7 +6531,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 dequeue_throttle:
 	if (!rq->cfs.h_nr_running)
-		dl_server_stop(&rq->fair_server);
+		fair_server_watchdog_stop(rq, true);
 
 	util_est_update(&rq->cfs, p, task_sleep);
 	hrtick_update(rq);
@@ -8193,6 +8240,7 @@ done: __maybe_unused;
 	hrtick_start_fair(rq, p);
 
 	update_misfit_status(p, rq);
+	fair_server_watchdog_stop(rq, false);
 
 	return p;
 
@@ -8248,6 +8296,8 @@ void fair_server_init(struct rq *rq)
 	dl_se->dl_period = 20 * TICK_NSEC;
 
 	dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick);
+
+	rq->fair_server_wd_running = 0;
 }
 
 /*
@@ -8262,6 +8312,9 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
 		cfs_rq = cfs_rq_of(se);
 		put_prev_entity(cfs_rq, se);
 	}
+
+	if (rq->cfs.h_nr_running)
+		fair_server_watchdog_start(rq);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d4a7c0823c53..cab5d2b1e71f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -353,6 +353,7 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
 		    dl_server_has_tasks_f has_tasks,
 		    dl_server_pick_f pick);
 
+#define FAIR_SERVER_WATCHDOG_INTERVAL	(HZ >> 1)
 extern void fair_server_init(struct rq *);
 
 #ifdef CONFIG_CGROUP_SCHED
@@ -1018,6 +1019,9 @@ struct rq {
 	struct dl_rq		dl;
 
 	struct sched_dl_entity	fair_server;
+	int			fair_server_active;
+	struct timer_list	fair_server_wd;
+	int			fair_server_wd_running;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this CPU: */
 	struct list_head	leaf_cfs_rq_list;