[16/24] Documentation: scheduler: correct spelling

Message ID 20230209071400.31476-17-rdunlap@infradead.org
State New
Series Documentation: correct lots of spelling errors (series 1)

Commit Message

Randy Dunlap Feb. 9, 2023, 7:13 a.m. UTC
Correct spelling problems for Documentation/scheduler/ as reported
by codespell.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>
---
 Documentation/scheduler/sched-bwc.rst    |    2 +-
 Documentation/scheduler/sched-energy.rst |    4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
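
For reference, the scan that produced these reports can be reproduced with
codespell itself. A minimal sketch, assuming codespell is installed and run
from the top of the kernel tree (the exact invocation is illustrative, not
taken from the patch):

  # Report misspellings in the scheduler docs (file, line, suggestion):
  codespell Documentation/scheduler/

  # Optionally write the suggested corrections in place:
  codespell -w Documentation/scheduler/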
  

Comments

Vincent Guittot Feb. 9, 2023, 9:20 a.m. UTC | #1
On Thu, 9 Feb 2023 at 08:14, Randy Dunlap <rdunlap@infradead.org> wrote:
>
> Correct spelling problems for Documentation/scheduler/ as reported
> by codespell.
>
> Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org
> Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>

Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>  Documentation/scheduler/sched-bwc.rst    |    2 +-
>  Documentation/scheduler/sched-energy.rst |    4 ++--
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff -- a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
> --- a/Documentation/scheduler/sched-bwc.rst
> +++ b/Documentation/scheduler/sched-bwc.rst
> @@ -186,7 +186,7 @@ average usage, albeit over a longer time
>  also limits the burst ability to no more than 1ms per cpu.  This provides
>  better more predictable user experience for highly threaded applications with
>  small quota limits on high core count machines. It also eliminates the
> -propensity to throttle these applications while simultanously using less than
> +propensity to throttle these applications while simultaneously using less than
>  quota amounts of cpu. Another way to say this, is that by allowing the unused
>  portion of a slice to remain valid across periods we have decreased the
>  possibility of wastefully expiring quota on cpu-local silos that don't need a
> diff -- a/Documentation/scheduler/sched-energy.rst b/Documentation/scheduler/sched-energy.rst
> --- a/Documentation/scheduler/sched-energy.rst
> +++ b/Documentation/scheduler/sched-energy.rst
> @@ -82,7 +82,7 @@ through the arch_scale_cpu_capacity() ca
>  The rest of platform knowledge used by EAS is directly read from the Energy
>  Model (EM) framework. The EM of a platform is composed of a power cost table
>  per 'performance domain' in the system (see Documentation/power/energy-model.rst
> -for futher details about performance domains).
> +for further details about performance domains).
>
>  The scheduler manages references to the EM objects in the topology code when the
>  scheduling domains are built, or re-built. For each root domain (rd), the
> @@ -281,7 +281,7 @@ mechanism called 'over-utilization'.
>  From a general standpoint, the use-cases where EAS can help the most are those
>  involving a light/medium CPU utilization. Whenever long CPU-bound tasks are
>  being run, they will require all of the available CPU capacity, and there isn't
> -much that can be done by the scheduler to save energy without severly harming
> +much that can be done by the scheduler to save energy without severely harming
>  throughput. In order to avoid hurting performance with EAS, CPUs are flagged as
>  'over-utilized' as soon as they are used at more than 80% of their compute
>  capacity. As long as no CPUs are over-utilized in a root domain, load balancing
  

Patch

diff -- a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
--- a/Documentation/scheduler/sched-bwc.rst
+++ b/Documentation/scheduler/sched-bwc.rst
@@ -186,7 +186,7 @@  average usage, albeit over a longer time
 also limits the burst ability to no more than 1ms per cpu.  This provides
 better more predictable user experience for highly threaded applications with
 small quota limits on high core count machines. It also eliminates the
-propensity to throttle these applications while simultanously using less than
+propensity to throttle these applications while simultaneously using less than
 quota amounts of cpu. Another way to say this, is that by allowing the unused
 portion of a slice to remain valid across periods we have decreased the
 possibility of wastefully expiring quota on cpu-local silos that don't need a
diff -- a/Documentation/scheduler/sched-energy.rst b/Documentation/scheduler/sched-energy.rst
--- a/Documentation/scheduler/sched-energy.rst
+++ b/Documentation/scheduler/sched-energy.rst
@@ -82,7 +82,7 @@  through the arch_scale_cpu_capacity() ca
 The rest of platform knowledge used by EAS is directly read from the Energy
 Model (EM) framework. The EM of a platform is composed of a power cost table
 per 'performance domain' in the system (see Documentation/power/energy-model.rst
-for futher details about performance domains).
+for further details about performance domains).
 
 The scheduler manages references to the EM objects in the topology code when the
 scheduling domains are built, or re-built. For each root domain (rd), the
@@ -281,7 +281,7 @@  mechanism called 'over-utilization'.
 From a general standpoint, the use-cases where EAS can help the most are those
 involving a light/medium CPU utilization. Whenever long CPU-bound tasks are
 being run, they will require all of the available CPU capacity, and there isn't
-much that can be done by the scheduler to save energy without severly harming
+much that can be done by the scheduler to save energy without severely harming
 throughput. In order to avoid hurting performance with EAS, CPUs are flagged as
 'over-utilized' as soon as they are used at more than 80% of their compute
 capacity. As long as no CPUs are over-utilized in a root domain, load balancing