Message ID | 20240220061542.489922-1-zhaoyang.huang@unisoc.com |
---|---
State | New |
Headers |
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, Jens Axboe <axboe@kernel.dk>, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Zhaoyang Huang <huangzhaoyang@gmail.com>, steve.kang@unisoc.com
Subject: [PATCH 1/2] sched: introduce helper function to calculate distribution over sched class
Date: Tue, 20 Feb 2024 14:15:41 +0800
Message-ID: <20240220061542.489922-1-zhaoyang.huang@unisoc.com>
Series | [1/2] sched: introduce helper function to calculate distribution over sched class
Commit Message
zhaoyang.huang
Feb. 20, 2024, 6:15 a.m. UTC
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Since RT, DL and IRQ time can be regarded as time lost by a CFS task, some
timing measurements want to know approximately how a measured value is
distributed across the scheduling classes, using the utilization accounting
values (nivcsw alone is not always enough). This commit introduces a helper
function for that purpose.

e.g.
Effective part of A = Total_time * cpu_util_cfs / cpu_util

Timing value A
(should be a process lasting at least several ticks, or statistics of a
repeated process)

Timing start
|
|
preempted by RT, DL or IRQ
|\
| This period is involuntary CPU give-up; need to know how long it lasts
|/
sched in again
|
|
|
Timing end

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)
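For a concrete sense of the intended scaling, with purely illustrative numbers
(not taken from the patch): if a timed operation spans 10 ms on a CPU whose
PELT averages report cpu_util_cfs = 512 while RT, DL and IRQ together
contribute another 256 (cpu_util = 768), the helper would attribute roughly
10 ms * 512 / 768 ≈ 6.7 ms of that interval to CFS execution, treating the
remaining ~3.3 ms as time lost to the other classes.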
Comments
On Tue, 20 Feb 2024 at 07:16, zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> As RT, DL, IRQ time could be deemed as lost time of CFS's task, some

It's lost only if cfs has been actually preempted

> timing value want to know the distribution of how these spread
> approximately by using utilization account value (nivcsw is not enough
> sometimes). This commit would like to introduce a helper function to
> achieve this goal.
>
> eg.
> Effective part of A = Total_time * cpu_util_cfs / cpu_util
>
> Timing value A
> (should be a process last for several TICKs or statistics of a repeadted
> process)
>
> Timing start
> |
> |
> preempted by RT, DL or IRQ
> |\
> | This period time is nonvoluntary CPU give up, need to know how long
> |/

preempted means that a cfs task stops running on the cpu and lets
another rt/dl task or an irq run on the cpu instead. We can't know
that. We know an average ratio of time spent in rt/dl and irq contexts
but not if the cpu was idle or running cfs task

> sched in again
> |
> |
> |
> Timing end
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   | 20 ++++++++++++++++++++
>  2 files changed, 21 insertions(+)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 77f01ac385f7..99cf09c47f72 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2318,6 +2318,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
>
>  /* Returns effective CPU energy utilization, as seen by the scheduler */
>  unsigned long sched_cpu_util(int cpu);
> +unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val);
>  #endif /* CONFIG_SMP */
>
>  #ifdef CONFIG_RSEQ
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 802551e0009b..217e2220fdc1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7494,6 +7494,26 @@ unsigned long sched_cpu_util(int cpu)
>  {
>         return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
>  }
> +
> +/*
> + * Calculate the approximate proportion of timing value consumed in cfs.
> + * The user must be aware of this is done by avg_util which is tracked by
> + * the geometric series as decaying the load by y^32 = 0.5 (unit is 1ms).
> + * That is, only the period last for at least several TICKs or the statistics
> + * of repeated timing value are suitable for this helper function.
> + */
> +unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val)
> +{
> +       unsigned int cpu = task_cpu(tsk);
> +       struct rq *rq = cpu_rq(cpu);
> +       unsigned long util;
> +
> +       if (tsk->sched_class != &fair_sched_class)
> +               return val;
> +       util = cpu_util_rt(rq) + cpu_util_cfs(cpu) + cpu_util_irq(rq) + cpu_util_dl(rq);

This is not correct as irq is not on the same clock domain: look at
effective_cpu_util()

You don't care about idle time ?

> +       return min(val, cpu_util_cfs(cpu) * val / util);
> +}
> +
>  #endif /* CONFIG_SMP */
>
>  /**
> --
> 2.25.1
>
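Vincent's pointer to effective_cpu_util() concerns the fact that irq time is
tracked against a different clock domain than the rq clock used for CFS/RT/DL
utilization. As an illustration only (this is not code from the thread: the
function name is made up, and it assumes the sketch sits inside kernel/sched/
where cpu_util_rt()/cpu_util_dl()/cpu_util_irq() and scale_irq_capacity() are
visible), a denominator built the way effective_cpu_util() handles irq
pressure might look roughly like this:

/*
 * Illustrative sketch only: the rq-clock based CFS/RT/DL utilization is
 * first scaled by the fraction of capacity left after irq, then the irq
 * contribution is added on top; anything below 'max' is idle headroom.
 */
static unsigned long cfs_util_denominator_sketch(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long max = arch_scale_cpu_capacity(cpu);
	unsigned long irq = cpu_util_irq(rq);
	unsigned long util;

	if (unlikely(irq >= max))
		return max;

	util = cpu_util_cfs(cpu) + cpu_util_rt(rq) + cpu_util_dl(rq);
	util = scale_irq_capacity(util, irq, max);	/* util *= (max - irq) / max */
	util += irq;

	return min(util, max);
}

Whether the denominator should remain this busy-time sum or the full CPU
capacity (so that idle time is counted too, per the reviewer's second
question) is left open for the next revision.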
On Thu, Feb 22, 2024 at 1:51 AM Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Tue, 20 Feb 2024 at 07:16, zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
> >
> > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> >
> > As RT, DL, IRQ time could be deemed as lost time of CFS's task, some
>
> It's lost only if cfs has been actually preempted

Yes. Actually, I just want to get the approximate proportion of how CFS
tasks (the whole runqueue) are preempted; preemption among CFS tasks is
not considered.

> > timing value want to know the distribution of how these spread
> > approximately by using utilization account value (nivcsw is not enough
> > sometimes). This commit would like to introduce a helper function to
> > achieve this goal.
> >
> > eg.
> > Effective part of A = Total_time * cpu_util_cfs / cpu_util
> >
> > Timing value A
> > (should be a process last for several TICKs or statistics of a repeadted
> > process)
> >
> > Timing start
> > |
> > |
> > preempted by RT, DL or IRQ
> > |\
> > | This period time is nonvoluntary CPU give up, need to know how long
> > |/
>
> preempted means that a cfs task stops running on the cpu and lets
> another rt/dl task or an irq run on the cpu instead. We can't know
> that. We know an average ratio of time spent in rt/dl and irq contexts
> but not if the cpu was idle or running cfs task

ok, will take idle into consideration and, as explained above, preemption
among cfs tasks is not considered on purpose

> > sched in again
> > |
> > |
> > |
> > Timing end
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > ---
> >  include/linux/sched.h |  1 +
> >  kernel/sched/core.c   | 20 ++++++++++++++++++++
> >  2 files changed, 21 insertions(+)
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 77f01ac385f7..99cf09c47f72 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2318,6 +2318,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
> >
> >  /* Returns effective CPU energy utilization, as seen by the scheduler */
> >  unsigned long sched_cpu_util(int cpu);
> > +unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val);
> >  #endif /* CONFIG_SMP */
> >
> >  #ifdef CONFIG_RSEQ
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 802551e0009b..217e2220fdc1 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7494,6 +7494,26 @@ unsigned long sched_cpu_util(int cpu)
> >  {
> >         return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
> >  }
> > +
> > +/*
> > + * Calculate the approximate proportion of timing value consumed in cfs.
> > + * The user must be aware of this is done by avg_util which is tracked by
> > + * the geometric series as decaying the load by y^32 = 0.5 (unit is 1ms).
> > + * That is, only the period last for at least several TICKs or the statistics
> > + * of repeated timing value are suitable for this helper function.
> > + */
> > +unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val)
> > +{
> > +       unsigned int cpu = task_cpu(tsk);
> > +       struct rq *rq = cpu_rq(cpu);
> > +       unsigned long util;
> > +
> > +       if (tsk->sched_class != &fair_sched_class)
> > +               return val;
> > +       util = cpu_util_rt(rq) + cpu_util_cfs(cpu) + cpu_util_irq(rq) + cpu_util_dl(rq);
>
> This is not correct as irq is not on the same clock domain: look at
> effective_cpu_util()
>
> You don't care about idle time ?

ok, will check.
thanks

> > +       return min(val, cpu_util_cfs(cpu) * val / util);
> > +}
> > +
> >  #endif /* CONFIG_SMP */
> >
> >  /**
> > --
> > 2.25.1
> >
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..99cf09c47f72 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2318,6 +2318,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
 
 /* Returns effective CPU energy utilization, as seen by the scheduler */
 unsigned long sched_cpu_util(int cpu);
+unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val);
 #endif /* CONFIG_SMP */
 
 #ifdef CONFIG_RSEQ
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 802551e0009b..217e2220fdc1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7494,6 +7494,26 @@ unsigned long sched_cpu_util(int cpu)
 {
        return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
 }
+
+/*
+ * Calculate the approximate proportion of timing value consumed in cfs.
+ * The user must be aware of this is done by avg_util which is tracked by
+ * the geometric series as decaying the load by y^32 = 0.5 (unit is 1ms).
+ * That is, only the period last for at least several TICKs or the statistics
+ * of repeated timing value are suitable for this helper function.
+ */
+unsigned long cfs_prop_by_util(struct task_struct *tsk, unsigned long val)
+{
+       unsigned int cpu = task_cpu(tsk);
+       struct rq *rq = cpu_rq(cpu);
+       unsigned long util;
+
+       if (tsk->sched_class != &fair_sched_class)
+               return val;
+       util = cpu_util_rt(rq) + cpu_util_cfs(cpu) + cpu_util_irq(rq) + cpu_util_dl(rq);
+       return min(val, cpu_util_cfs(cpu) * val / util);
+}
+
 #endif /* CONFIG_SMP */
 
 /**
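As a usage illustration only (the call site below is hypothetical and not part
of this series; patch 2/2 is not shown on this page, and the function name and
ktime-based timing are made up, assuming the series is applied as posted), a
timing consumer could apply the helper to a measured duration like this:

#include <linux/ktime.h>
#include <linux/sched.h>

/*
 * Hypothetical consumer: scale a measured wall-clock span by the CFS share
 * of the CPU's utilization. Per the helper's comment, this is only
 * meaningful for spans lasting several ticks or for aggregated statistics
 * of a repeated operation.
 */
static u64 cfs_effective_span_ns(struct task_struct *tsk, u64 start_ns)
{
	u64 span = ktime_get_ns() - start_ns;

	return cfs_prop_by_util(tsk, span);
}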