From patchwork Wed May 31 11:58:53 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 101449
X-Mailing-List: linux-kernel@vger.kernel.org
Message-ID: <20230531124604.546980086@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 31 May 2023 13:58:53 +0200
From: Peter Zijlstra
To: mingo@kernel.org, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, juri.lelli@redhat.com,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, corbet@lwn.net, qyousef@layalina.io,
    chris.hyser@oracle.com, patrick.bellasi@matbug.net, pjt@google.com,
    pavel@ucw.cz, qperret@google.com, tim.c.chen@linux.intel.com,
    joshdon@google.com, timj@gnu.org, kprateek.nayak@amd.com,
    yu.c.chen@intel.com, youssefesmat@chromium.org, joel@joelfernandes.org,
    efault@gmx.de, tglx@linutronix.de
Subject: [RFC][PATCH 14/15] sched/fair: Add sched group latency support
References: <20230531115839.089944915@infradead.org>

From: Vincent Guittot

A task can set its latency priority with
sched_setattr(), which is then used to set the latency offset of its
sched_entity, but sched group entities still have the default latency
offset value.

Add a latency.nice field in the cpu cgroup controller to set the latency
priority of the group, similarly to sched_setattr(). The latency priority
is then used to set the offset of the sched_entities of the group.

Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: K Prateek Nayak
Link: https://lkml.kernel.org/r/20230224093454.956298-7-vincent.guittot@linaro.org
---
 Documentation/admin-guide/cgroup-v2.rst |   10 ++++++++++
 kernel/sched/core.c                     |   30 ++++++++++++++++++++++++++++++
 kernel/sched/fair.c                     |   27 +++++++++++++++++++++++++++
 kernel/sched/sched.h                    |    4 ++++
 4 files changed, 71 insertions(+)

--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1121,6 +1121,16 @@ All time durations are in microseconds.
 	values similar to the sched_setattr(2). This maximum utilization
 	value is used to clamp the task specific maximum utilization clamp.

+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2). The latency_nice of a group is
+	used to limit the impact of the latency_nice of a task outside the
+	group.
 Memory

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11177,6 +11177,25 @@ static int cpu_idle_write_s64(struct cgr
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft)
+{
+	return PRIO_TO_NICE(css_tg(css)->latency_prio);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				      struct cftype *cft, s64 nice)
+{
+	int prio;
+
+	if (nice < MIN_NICE || nice > MAX_NICE)
+		return -ERANGE;
+
+	prio = NICE_TO_PRIO(nice);
+
+	return sched_group_set_latency(css_tg(css), prio);
+}
 #endif

 static struct cftype cpu_legacy_files[] = {
@@ -11191,6 +11210,11 @@ static struct cftype cpu_legacy_files[]
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11408,6 +11432,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12293,6 +12293,7 @@ int alloc_fair_sched_group(struct task_g
 		goto err;

 	tg->shares = NICE_0_LOAD;
+	tg->latency_prio = DEFAULT_PRIO;

 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));

@@ -12391,6 +12392,9 @@ void init_tg_cfs_entry(struct task_group
 	}

 	se->my_q = cfs_rq;
+
+	set_latency_fair(se, tg->latency_prio - MAX_RT_PRIO);
+
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -12519,6 +12523,29 @@ int sched_group_set_idle(struct task_gro

 	mutex_unlock(&shares_mutex);
 	return 0;
+}
+
+int sched_group_set_latency(struct task_group *tg, int prio)
+{
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_prio == prio) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_prio = prio;
+
+	for_each_possible_cpu(i)
+		set_latency_fair(tg->se[i], prio - MAX_RT_PRIO);
+
+	mutex_unlock(&shares_mutex);
+	return 0;
 }

 #else /* CONFIG_FAIR_GROUP_SCHED */
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -378,6 +378,8 @@ struct task_group {
 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;

+	/* latency priority of the group. */
+	int			latency_prio;
 #ifdef CONFIG_SMP
 	/*
@@ -488,6 +490,8 @@ extern int sched_group_set_shares(struct

 extern int sched_group_set_idle(struct task_group *tg, long idle);

+extern int sched_group_set_latency(struct task_group *tg, int prio);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se, struct cfs_rq *prev,
 			     struct cfs_rq *next);