[1/2] sched/core: switch struct rq->nr_iowait to a normal int

Message ID 20240228192355.290114-2-axboe@kernel.dk
State New
Series Split iowait into two states

Commit Message

Jens Axboe Feb. 28, 2024, 7:16 p.m. UTC
  In 3 of the 4 spots where we modify rq->nr_iowait we already hold the
rq lock, and hence don't need atomics to modify the current per-rq
iowait count. In the 4th case, where we are scheduling in on a different
CPU than the task was previously on, we do not hold the previous rq lock,
and hence still need to use an atomic to increment the iowait count.

Rename the existing nr_iowait to nr_iowait_remote, and use that for the
4th case. The other three cases can simply inc/dec in a non-atomic
fashion under the held rq lock.

The per-rq iowait now becomes the difference between the two, the local
count minus the remote count.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 kernel/sched/core.c    | 15 ++++++++++-----
 kernel/sched/cputime.c |  3 +--
 kernel/sched/sched.h   |  8 +++++++-
 3 files changed, 18 insertions(+), 8 deletions(-)
  

Comments

Thomas Gleixner Feb. 29, 2024, 4:53 p.m. UTC | #1
On Wed, Feb 28 2024 at 12:16, Jens Axboe wrote:
> In 3 of the 4 spots where we modify rq->nr_iowait we already hold the

We modify something and hold locks? It's documented that changelogs
should not impersonate code. It simply does not make any sense.

> rq lock, and hence don't need atomics to modify the current per-rq
> iowait count. In the 4th case, where we are scheduling in on a different
> CPU than the task was previously on, we do not hold the previous rq lock,
> and hence still need to use an atomic to increment the iowait count.
>
> Rename the existing nr_iowait to nr_iowait_remote, and use that for the
> 4th case. The other three cases can simply inc/dec in a non-atomic
> fashion under the held rq lock.
>
> The per-rq iowait now becomes the difference between the two, the local
> count minus the remote count.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Other than that:

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
  
Jens Axboe Feb. 29, 2024, 5:19 p.m. UTC | #2
On 2/29/24 9:53 AM, Thomas Gleixner wrote:
> On Wed, Feb 28 2024 at 12:16, Jens Axboe wrote:
>> In 3 of the 4 spots where we modify rq->nr_iowait we already hold the
> 
> We modify something and hold locks? It's documented that changelogs
> should not impersonate code. It simply does not make any sense.

Agree it doesn't read that well... It's meant to say that we already
hold the rq lock in 3 of the 4 spots, hence using atomic_inc/dec is
pointless for those cases.

> Other than that:
> 
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

Thanks for the review!
  
Thomas Gleixner Feb. 29, 2024, 5:42 p.m. UTC | #3
On Thu, Feb 29 2024 at 10:19, Jens Axboe wrote:
> On 2/29/24 9:53 AM, Thomas Gleixner wrote:
>> On Wed, Feb 28 2024 at 12:16, Jens Axboe wrote:
>>> In 3 of the 4 spots where we modify rq->nr_iowait we already hold the
>> 
>> We modify something and hold locks? It's documented that changelogs
>> should not impersonate code. It simply does not make any sense.
>
> Agree it doesn't read that well... It's meant to say that we already
> hold the rq lock in 3 of the 4 spots, hence using atomic_inc/dec is
> pointless for those cases.

That and the 'we'. Write it neutral.

The accounting of rq::nr_iowait is using an atomic_t but 3 out of 4
places hold runqueue lock already. ....

So but I just noticed that there is actually an issue with this:

>  unsigned int nr_iowait_cpu(int cpu)
>  {
> -	return atomic_read(&cpu_rq(cpu)->nr_iowait);
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	return rq->nr_iowait - atomic_read(&rq->nr_iowait_remote);

The access to rq->nr_iowait is not protected by the runqueue lock and
therefore a data race when @cpu is not the current CPU.

This needs to be properly annotated and explained why it does not
matter.
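
For illustration, one shape such an annotation could take is to route the
locked updates through WRITE_ONCE() and the lockless read through
READ_ONCE(). A minimal sketch with hypothetical helper names (not part of
the posted patch); it only addresses load/store tearing, not the wrap
problem discussed later in the thread:

/* Sketch only: hypothetical helpers, not part of the posted patch. */
static inline void rq_iowait_local_inc(struct rq *rq)
{
	/*
	 * Updates stay under the rq lock; WRITE_ONCE() pairs with the
	 * lockless READ_ONCE() below so the access is marked for KCSAN.
	 */
	lockdep_assert_rq_held(rq);
	WRITE_ONCE(rq->nr_iowait, rq->nr_iowait + 1);
}

static inline void rq_iowait_local_dec(struct rq *rq)
{
	lockdep_assert_rq_held(rq);
	WRITE_ONCE(rq->nr_iowait, rq->nr_iowait - 1);
}

unsigned int nr_iowait_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	/* Lockless read: stale values are tolerated, torn loads are not. */
	return READ_ONCE(rq->nr_iowait) - atomic_read(&rq->nr_iowait_remote);
}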

So s/Reviewed-by/Un-Reviewed-by/

Though thinking about it some more. Is this split a real benefit over
always using the atomic? Do you have numbers to show?

Thanks,

        tglx
  
Jens Axboe Feb. 29, 2024, 5:49 p.m. UTC | #4
On 2/29/24 10:42 AM, Thomas Gleixner wrote:
> On Thu, Feb 29 2024 at 10:19, Jens Axboe wrote:
>> On 2/29/24 9:53 AM, Thomas Gleixner wrote:
>>> On Wed, Feb 28 2024 at 12:16, Jens Axboe wrote:
>>>> In 3 of the 4 spots where we modify rq->nr_iowait we already hold the
>>>
>>> We modify something and hold locks? It's documented that changelogs
>>> should not impersonate code. It simply does not make any sense.
>>
>> Agree it doesn't read that well... It's meant to say that we already
>> hold the rq lock in 3 of the 4 spots, hence using atomic_inc/dec is
>> pointless for those cases.
> 
> That and the 'we'. Write it neutral.
> 
> The accounting of rq::nr_iowait is using an atomic_t but 3 out of 4
> places hold runqueue lock already. ....

Will do

> So but I just noticed that there is actually an issue with this:
> 
>>  unsigned int nr_iowait_cpu(int cpu)
>>  {
>> -	return atomic_read(&cpu_rq(cpu)->nr_iowait);
>> +	struct rq *rq = cpu_rq(cpu);
>> +
>> +	return rq->nr_iowait - atomic_read(&rq->nr_iowait_remote);
> 
> The access to rq->nr_iowait is not protected by the runqueue lock and
> therefore a data race when @cpu is not the current CPU.
> 
> This needs to be properly annotated and explained why it does not
> matter.

But that was always racy before as well, if someone else is inc/dec'ing
->nr_iowait while it's being read, you could get either the before or
after value. This doesn't really change that. I could've sworn I
mentioned that in the commit message, but I did not.

> So s/Reviewed-by/Un-Reviewed-by/
> 
> Though thinking about it some more. Is this split a real benefit over
> always using the atomic? Do you have numbers to show?

It was more on Peter's complaint that now we're trading a single atomic
for two, hence I got to thinking about nr_iowait in general. I don't
have numbers showing it matters, as mentioned in another email the most
costly part about this seems to be fetching task->in_iowait and not the
actual atomic.
  
Thomas Gleixner Feb. 29, 2024, 7:52 p.m. UTC | #5
On Thu, Feb 29 2024 at 10:49, Jens Axboe wrote:
> On 2/29/24 10:42 AM, Thomas Gleixner wrote:
>> So but I just noticed that there is actually an issue with this:
>> 
>>>  unsigned int nr_iowait_cpu(int cpu)
>>>  {
>>> -	return atomic_read(&cpu_rq(cpu)->nr_iowait);
>>> +	struct rq *rq = cpu_rq(cpu);
>>> +
>>> +	return rq->nr_iowait - atomic_read(&rq->nr_iowait_remote);
>> 
>> The access to rq->nr_iowait is not protected by the runqueue lock and
>> therefore a data race when @cpu is not the current CPU.
>> 
>> This needs to be properly annotated and explained why it does not
>> matter.
>
> But that was always racy before as well, if someone else is inc/dec'ing
> ->nr_iowait while it's being read, you could get either the before or
> after value. This doesn't really change that. I could've sworn I
> mentioned that in the commit message, but I did not.

There are actually two issues here:

1) atomic_read() vs. atomic_inc/dec() guarantees that the read value
   is consistent in itself.

   Non-atomic inc/dec is not guaranteeing that the concurrent read is a
   consistent value as the compiler is free to do store/load
   tearing. Unlikely but not guaranteed to never happen.

   KCSAN will complain about it sooner than later and then someone has
   to go and do the analysis and the annotation. I rather let you do
   the reasoning now than chasing you down later :)

2) What's worse is that the result can be completely bogus:

   i.e.

   CPU0                                 CPU1                    CPU2
   a = rq(CPU1)->nr_iowait; // 0
                                        rq->nr_iowait++;
                                                                rq(CPU1)->nr_iowait_remote++;
   b = rq(CPU1)->nr_iowait_remote; // 1

   r = a - b; // -1
   return (unsigned int) r; // UINT_MAX

   The consumers of this interface might be upset. :)

   While with a single atomic_t it's guaranteed that the result is
   always greater or equal zero.
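
   The wrap is easy to see in plain (userspace) C; a self-contained
   snippet mirroring the interleaving above, where only the remote
   increment has been observed:

#include <stdio.h>

int main(void)
{
	unsigned int nr_iowait = 0;		/* local count read before its ++ lands  */
	unsigned int nr_iowait_remote = 1;	/* remote count read after its ++ landed */
	unsigned int r = nr_iowait - nr_iowait_remote;

	printf("%u\n", r);			/* 4294967295, i.e. UINT_MAX */
	return 0;
}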

>> So s/Reviewed-by/Un-Reviewed-by/
>> 
>> Though thinking about it some more. Is this split a real benefit over
>> always using the atomic? Do you have numbers to show?
>
> It was more on Peter's complaint that now we're trading a single atomic
> for two, hence I got to thinking about nr_iowait in general. I don't
> have numbers showing it matters, as mentioned in another email the most
> costly part about this seems to be fetching task->in_iowait and not the
> actual atomic.

On the write side (except for the remote case) the cache line is already
dirty on the current CPU and I doubt that the atomic will be
noticeable. If there is concurrent remote access to the runqueue then the
cache line is bouncing no matter what.

On the read side there is always an atomic operation required, so it's
not really different.

I assume Peter's complaint was about the extra nr_iowait_acct part. I
think that's solvable without the extra atomic_t member and with a
single atomic_add()/sub(). atomic_t is 32bit wide, so what about
splitting the thing and adding/subtracting both in one go?

While sketching this I noticed that prepare/finish can be written w/o
any conditionals.

int io_schedule_prepare(void)
{
	int flags = current->in_iowait + current->in_iowait_acct << 16;

	current->in_iowait = 1;
	current->in_iowait_acct = 1;
	blk_flush_plug(current->plug, true);
	return flags;
}

void io_schedule_finish(int old_wait_flags)
{
	current->in_iowait = flags & 0x01;
        current->in_iowait_acct = flags >> 16;
}

Now __schedule():

	if (prev->in_iowait) {
           	int x = 1 + current->in_iowait_acct << 16;

		atomic_add(x, rq->nr_iowait);
		delayacct_blkio_start();
	}

and ttwu_do_activate():

	if (p->in_iowait) {
        	int x = 1 + current->in_iowait_acct << 16;

                delayacct_blkio_end(p);
                atomic_sub(x, task_rq(p)->nr_iowait);
	}


and try_to_wake_up():

	delayacct_blkio_end(p);

	int x = 1 + current->in_iowait_acct << 16;

	atomic_add(x, task_rq(p)->nr_iowait);

nr_iowait_acct_cpu() becomes:

        return atomic_read(&cpu_rq(cpu)->nr_iowait) >> 16;

and nr_iowait_cpu():

        return atomic_read(&cpu_rq(cpu)->nr_iowait) & ((1 << 16) - 1);

Obviously written with proper inline wrappers and defines, but you get
the idea.

Hmm?

Thanks,

        tglx
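
Spelled out a bit further, the sketch above could look like the following,
with the shifts parenthesized ('+' binds tighter than '<<' in C) and the
per-task delta factored into one hypothetical helper; in_iowait_acct is
assumed from patch 2/2 of the series, and both wake-side call sites
subtract the packed delta, mirroring the atomic_dec() they replace:

#define NR_IOWAIT_ACCT_SHIFT	16
#define NR_IOWAIT_MASK		((1U << NR_IOWAIT_ACCT_SHIFT) - 1)

/* Packed per-task delta: bit 0 for iowait, bit 16 for iowait_acct. */
static inline int iowait_delta(struct task_struct *p)
{
	return p->in_iowait + (p->in_iowait_acct << NR_IOWAIT_ACCT_SHIFT);
}

int io_schedule_prepare(void)
{
	int flags = iowait_delta(current);

	current->in_iowait = 1;
	current->in_iowait_acct = 1;
	blk_flush_plug(current->plug, true);
	return flags;
}

void io_schedule_finish(int old_wait_flags)
{
	current->in_iowait = old_wait_flags & 1;
	current->in_iowait_acct = old_wait_flags >> NR_IOWAIT_ACCT_SHIFT;
}

/* __schedule(): prev goes to sleep on this rq */
	if (prev->in_iowait) {
		atomic_add(iowait_delta(prev), &rq->nr_iowait);
		delayacct_blkio_start();
	}

/* ttwu_do_activate() and the remote path in try_to_wake_up(): p wakes up */
	if (p->in_iowait) {
		delayacct_blkio_end(p);
		atomic_sub(iowait_delta(p), &task_rq(p)->nr_iowait);
	}

unsigned int nr_iowait_acct_cpu(int cpu)
{
	return atomic_read(&cpu_rq(cpu)->nr_iowait) >> NR_IOWAIT_ACCT_SHIFT;
}

unsigned int nr_iowait_cpu(int cpu)
{
	return atomic_read(&cpu_rq(cpu)->nr_iowait) & NR_IOWAIT_MASK;
}

The single atomic keeps the read result internally consistent (no negative
difference), at the cost of capping each half at 2^16 - 1, which is what
prompts the atomic_long_t question below.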
  
Jens Axboe Feb. 29, 2024, 10:30 p.m. UTC | #6
On 2/29/24 12:52 PM, Thomas Gleixner wrote:
> On Thu, Feb 29 2024 at 10:49, Jens Axboe wrote:
>> On 2/29/24 10:42 AM, Thomas Gleixner wrote:
>>> So but I just noticed that there is actually an issue with this:
>>>
>>>>  unsigned int nr_iowait_cpu(int cpu)
>>>>  {
>>>> -	return atomic_read(&cpu_rq(cpu)->nr_iowait);
>>>> +	struct rq *rq = cpu_rq(cpu);
>>>> +
>>>> +	return rq->nr_iowait - atomic_read(&rq->nr_iowait_remote);
>>>
>>> The access to rq->nr_iowait is not protected by the runqueue lock and
>>> therefore a data race when @cpu is not the current CPU.
>>>
>>> This needs to be properly annotated and explained why it does not
>>> matter.
>>
>> But that was always racy before as well, if someone else is inc/dec'ing
>> ->nr_iowait while it's being read, you could get either the before or
>> after value. This doesn't really change that. I could've sworn I
>> mentioned that in the commit message, but I did not.
> 
> There are actually two issues here:
> 
> 1) atomic_read() vs. atomic_inc/dec() guarantees that the read value
>    is consistent in itself.
> 
>    Non-atomic inc/dec is not guaranteeing that the concurrent read is a
>    consistent value as the compiler is free to do store/load
>    tearing. Unlikely but not guaranteed to never happen.
> 
>    KCSAN will complain about it sooner than later and then someone has
>    to go and do the analysis and the annotation. I rather let you do
>    the reasoning now than chasing you down later :)

Fair enough.

> 2) What's worse is that the result can be completely bogus:
> 
>    i.e.
> 
>    CPU0                                 CPU1                    CPU2
>    a = rq(CPU1)->nr_iowait; // 0
>                                         rq->nr_iowait++;
>                                                                 rq(CPU1)->nr_iowait_remote++;
>    b = rq(CPU1)->nr_iowait_remote; // 1
> 
>    r = a - b; // -1
>    return (unsigned int) r; // UINT_MAX
> 
>    The consumers of this interface might be upset. :)
> 
>    While with a single atomic_t it's guaranteed that the result is
>    always greater or equal zero.

Yeah OK, this is a real problem...

>>> So s/Reviewed-by/Un-Reviewed-by/
>>>
>>> Though thinking about it some more. Is this split a real benefit over
>>> always using the atomic? Do you have numbers to show?
>>
>> It was more on Peter's complaint that now we're trading a single atomic
>> for two, hence I got to thinking about nr_iowait in general. I don't
>> have numbers showing it matters, as mentioned in another email the most
>> costly part about this seems to be fetching task->in_iowait and not the
>> actual atomic.
> 
> On the write side (except for the remote case) the cache line is already
> dirty on the current CPU and I doubt that the atomic will be
> noticeable. If there is concurrent remote access to the runqueue then the
> cache line is bouncing no matter what.

That was my exact thinking too, same cacheline and back-to-back atomics
don't really matter vs a single atomic on it.

> On the read side there is always an atomic operation required, so it's
> not really different.
> 
> I assume Peter's complaint was about the extra nr_iowait_acct part. I
> think that's solvable without the extra atomic_t member and with a
> single atomic_add()/sub(). atomic_t is 32bit wide, so what about
> splitting the thing and adding/subtracting both in one go?
> 
> While sketching this I noticed that prepare/finish can be written w/o
> any conditionals.
> 
> int io_schedule_prepare(void)
> {
> 	int flags = current->in_iowait + current->in_iowait_acct << 16;
> 
> 	current->in_iowait = 1;
> 	current->in_iowait_acct = 1;
> 	blk_flush_plug(current->plug, true);
> 	return flags;
> }
> 
> void io_schedule_finish(int old_wait_flags)
> {
> 	current->in_iowait = flags & 0x01;
>         current->in_iowait_acct = flags >> 16;
> }
> 
> Now __schedule():
> 
> 	if (prev->in_iowait) {
>            	int x = 1 + current->in_iowait_acct << 16;
> 
> 		atomic_add(x, rq->nr_iowait);
> 		delayacct_blkio_start();
> 	}
> 
> and ttwu_do_activate():
> 
> 	if (p->in_iowait) {
>         	int x = 1 + current->in_iowait_acct << 16;
> 
>                 delayacct_blkio_end(p);
>                 atomic_sub(x, task_rq(p)->nr_iowait);
> 	}
> 
> 
> and try_to_wake_up():
> 
> 	delayacct_blkio_end(p);
> 
> 	int x = 1 + current->in_iowait_acct << 16;
> 
> 	atomic_add(x, task_rq(p)->nr_iowait);
> 
> nr_iowait_acct_cpu() becomes:
> 
>         return atomic_read(&cpu_rq(cpu)->nr_iowait) >> 16;
> 
> and nr_iowait_cpu():
> 
>         return atomic_read(&cpu_rq(cpu)->nr_iowait) & ((1 << 16) - 1);
> 
> Obviously written with proper inline wrappers and defines, but you get
> the idea.

I'll play with this a bit, but do we want to switch to an atomic_long_t
for this? 2^16 in iowait seems extreme, but it definitely seems possible
to overflow it.
  
Thomas Gleixner March 1, 2024, 12:02 a.m. UTC | #7
On Thu, Feb 29 2024 at 15:30, Jens Axboe wrote:
> On 2/29/24 12:52 PM, Thomas Gleixner wrote:
>>         return atomic_read(&cpu_rq(cpu)->nr_iowait) & ((1 << 16) - 1);
>> 
>> Obviously written with proper inline wrappers and defines, but you get
>> the idea.
>
> I'll play with this a bit, but do we want to switch to an atomic_long_t
> for this? 2^16 in iowait seems extreme, but it definitely seems possible
> to overflow it.

Indeed. 32bit has PID_MAX_LIMIT == 0x8000 which obviously fits into 16
bits, while 64bit lifts that limit and relies on memory exhaustion to
limit the number of concurrent threads on the machine, but that
obviously can exceed 16bits.

Whether more than 2^16 tasks can sit in iowait concurrently on a single CPU
is a different question, and probably a more academic one. :)

Though as this will touch all nr_iowait places anyway, changing it to
atomic_long_t in a preparatory patch first makes a lot of sense.

Thanks,

        tglx
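
If the counter does move to atomic_long_t first, the same packing can give
each half 32 bits on 64-bit (and 16 bits on 32-bit, which still covers
PID_MAX_LIMIT there, per the note above). A rough sketch under the same
assumptions and hypothetical names as the earlier one, with the return
types widened only for the sketch:

#define NR_IOWAIT_ACCT_SHIFT	(BITS_PER_LONG / 2)
#define NR_IOWAIT_MASK		((1UL << NR_IOWAIT_ACCT_SHIFT) - 1)

/*
 * rq->nr_iowait becomes an atomic_long_t; the write sides switch to
 * atomic_long_add()/atomic_long_sub() of the packed delta.
 */
static inline long iowait_delta(struct task_struct *p)
{
	return p->in_iowait + ((long)p->in_iowait_acct << NR_IOWAIT_ACCT_SHIFT);
}

unsigned long nr_iowait_acct_cpu(int cpu)
{
	return (unsigned long)atomic_long_read(&cpu_rq(cpu)->nr_iowait) >>
		NR_IOWAIT_ACCT_SHIFT;
}

unsigned long nr_iowait_cpu(int cpu)
{
	return (unsigned long)atomic_long_read(&cpu_rq(cpu)->nr_iowait) &
		NR_IOWAIT_MASK;
}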
  

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9116bcc90346..48d15529a777 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3789,7 +3789,7 @@  ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 #endif
 	if (p->in_iowait) {
 		delayacct_blkio_end(p);
-		atomic_dec(&task_rq(p)->nr_iowait);
+		task_rq(p)->nr_iowait--;
 	}
 
 	activate_task(rq, p, en_flags);
@@ -4354,8 +4354,10 @@  int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
 		if (task_cpu(p) != cpu) {
 			if (p->in_iowait) {
+				struct rq *__rq = task_rq(p);
+
 				delayacct_blkio_end(p);
-				atomic_dec(&task_rq(p)->nr_iowait);
+				atomic_inc(&__rq->nr_iowait_remote);
 			}
 
 			wake_flags |= WF_MIGRATED;
@@ -5463,7 +5465,9 @@  unsigned long long nr_context_switches(void)
 
 unsigned int nr_iowait_cpu(int cpu)
 {
-	return atomic_read(&cpu_rq(cpu)->nr_iowait);
+	struct rq *rq = cpu_rq(cpu);
+
+	return rq->nr_iowait - atomic_read(&rq->nr_iowait_remote);
 }
 
 /*
@@ -6681,7 +6685,7 @@  static void __sched notrace __schedule(unsigned int sched_mode)
 			deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
 
 			if (prev->in_iowait) {
-				atomic_inc(&rq->nr_iowait);
+				rq->nr_iowait++;
 				delayacct_blkio_start();
 			}
 		}
@@ -10029,7 +10033,8 @@  void __init sched_init(void)
 #endif
 #endif /* CONFIG_SMP */
 		hrtick_rq_init(rq);
-		atomic_set(&rq->nr_iowait, 0);
+		rq->nr_iowait = 0;
+		atomic_set(&rq->nr_iowait_remote, 0);
 
 #ifdef CONFIG_SCHED_CORE
 		rq->core = rq;
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index af7952f12e6c..0ed81c2d3c3b 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -222,9 +222,8 @@  void account_steal_time(u64 cputime)
 void account_idle_time(u64 cputime)
 {
 	u64 *cpustat = kcpustat_this_cpu->cpustat;
-	struct rq *rq = this_rq();
 
-	if (atomic_read(&rq->nr_iowait) > 0)
+	if (nr_iowait_cpu(smp_processor_id()) > 0)
 		cpustat[CPUTIME_IOWAIT] += cputime;
 	else
 		cpustat[CPUTIME_IDLE] += cputime;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 001fe047bd5d..91fa5b4d45ed 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1049,7 +1049,13 @@  struct rq {
 	u64			clock_idle_copy;
 #endif
 
-	atomic_t		nr_iowait;
+	/*
+	 * Total per-cpu iowait is the difference of the two below. One is
+	 * modified under the rq lock (nr_iowait), and if we don't have the rq
+	 * lock, then nr_iowait_remote is used.
+	 */
+	unsigned int		nr_iowait;
+	atomic_t		nr_iowait_remote;
 
 #ifdef CONFIG_SCHED_DEBUG
 	u64 last_seen_need_resched_ns;