[v2,2/3] vmstat: skip periodic vmstat update for nohz full CPUs

Message ID 20230602190115.521067386@redhat.com
State New
Series vmstat bug fixes for nohz_full and isolated CPUs

Commit Message

Marcelo Tosatti June 2, 2023, 6:57 p.m. UTC
The interruption caused by vmstat_update is undesirable
for certain applications:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the

       	oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update  
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold.

Skip periodic updates for nohz full CPUs. Any callers who
need precise values should use a snapshot of the per-CPU
counters, or use the global counters with measures to 
handle errors up to thresholds (see calculate_normal_threshold).
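
To illustrate the two read paths, a rough sketch follows (the helper
names here are made up for illustration; zone_page_state() and
zone_page_state_snapshot() are the existing interfaces):

static unsigned long free_pages_fast(struct zone *zone)
{
	/*
	 * Cheap read: the global counter alone. It can lag behind by
	 * roughly (per-CPU threshold * online CPUs), because per-CPU
	 * deltas are only folded in periodically or once they cross
	 * the threshold.
	 */
	return zone_page_state(zone, NR_FREE_PAGES);
}

static unsigned long free_pages_exact(struct zone *zone)
{
	/*
	 * Precise read: also folds in the pending per-CPU deltas, at
	 * the cost of touching every online CPU's counters.
	 */
	return zone_page_state_snapshot(zone, NR_FREE_PAGES);
}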

Suggested by Michal Hocko.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

v2: use cpu_is_isolated		(Michal Hocko)
  

Comments

Michal Hocko June 5, 2023, 7:55 a.m. UTC | #1
On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> The interruption caused by vmstat_update is undesirable
> for certain applications:
> 
> oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> 
> The example above shows an additional 7us for the
> 
>        	oslat -> kworker -> oslat
> 
> switches. In the case of a virtualized CPU, and the vmstat_update  
> interruption in the host (of a qemu-kvm vcpu), the latency penalty
> observed in the guest is higher than 50us, violating the acceptable
> latency threshold.

I personally find the above problem description insufficient. I have
asked several times and only got piece by piece information each time.
Maybe there is a reason to be secretive but it would be great to get at
least some basic expectations described and what they are based on.

E.g. workloads are running on isolated cpus with nohz full mode to
shield off any kernel interruption. Yet there are operations (like mlock,
but not mlock alone) that update per cpu counters which will eventually
get flushed, and that flushing will cause some interference.
Now to the host/guest transition and interference: how does that happen
when the guest is running on an isolated and dedicated cpu?

> Skip periodic updates for nohz full CPUs. Any callers who
> need precise values should use a snapshot of the per-CPU
> counters, or use the global counters with measures to 
> handle errors up to thresholds (see calculate_normal_threshold).

I would rephrase this paragraph.
In-kernel users of vmstat counters either require the precise value, and
then they use the zone_page_state_snapshot interface, or they can live with
an imprecision, as the regular flushing can happen at an arbitrary time and
the cumulative error can grow (see calculate_normal_threshold).

From that POV the regular flushing can be postponed for CPUs that have
been isolated from the kernel interference without critical
infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
for all isolated CPUs to avoid interference with the isolated workload.
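
For reference, the snapshot variant folds the pending per-CPU deltas into
the value at read time; roughly like this (simplified, modulo the exact
field names in current mainline):

static unsigned long zone_page_state_snapshot(struct zone *zone,
					      enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	int cpu;

	/* Add the deltas still sitting in the per-CPU counters. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}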

> Suggested by Michal Hocko.
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> 
> ---
> 
> v2: use cpu_is_isolated		(Michal Hocko)
> 
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -28,6 +28,7 @@
>  #include <linux/mm_inline.h>
>  #include <linux/page_ext.h>
>  #include <linux/page_owner.h>
> +#include <linux/sched/isolation.h>
>  
>  #include "internal.h"
>  
> @@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
>  	for_each_online_cpu(cpu) {
>  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
>  
> +		/*
> +		 * Skip periodic updates for isolated CPUs.
> +		 * Any callers who need precise values should use
> +		 * a snapshot of the per-CPU counters, or use the global
> +		 * counters with measures to handle errors up to
> +		 * thresholds (see calculate_normal_threshold).
> +		 */
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +
>  		if (!delayed_work_pending(dw) && need_update(cpu))
>  			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>  
>
  
Marcelo Tosatti June 5, 2023, 2:53 p.m. UTC | #2
On Mon, Jun 05, 2023 at 09:55:57AM +0200, Michal Hocko wrote:
> On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> > The interruption caused by vmstat_update is undesirable
> > for certain applications:
> > 
> > oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> > oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> > oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> > kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> > 
> > The example above shows an additional 7us for the
> > 
> >        	oslat -> kworker -> oslat
> > 
> > switches. In the case of a virtualized CPU, and the vmstat_update  
> > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > observed in the guest is higher than 50us, violating the acceptable
> > latency threshold.
> 
> I personally find the above problem description insufficient. I have
> asked several times and only got piece by piece information each time.
> Maybe there is a reason to be secretive but it would be great to get at
> least some basic expectations described  and what they are based on.

There is no reason to be secretive. 

> 
> E.g. workloads are running on isolated cpus with nohz full mode to
> shield off any kernel interruption. Yet there are operations (like mlock,
> but not mlock alone) that update per cpu counters which will eventually
> get flushed, and that flushing will cause some interference.
> Now to the host/guest transition and interference: how does that happen
> when the guest is running on an isolated and dedicated cpu?

The updated changelog follows. Does it contain the information requested?

----

Performance details for the kworker interruption:

Consider workloads running on isolated cpus with nohz full mode to
shield off any kernel interruption, for example a VM running a
time-sensitive application with a 50us maximum acceptable interruption
(use case: soft PLC).

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the

        oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold.

The isolated vCPU can perform operations that modify per-CPU page counters,
for example to complete I/O operations:

      CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
      CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
 => 0xffffffffc042b083
 => mod_zone_page_state
 => __folio_end_writeback
 => folio_end_writeback
 => iomap_finish_ioend
 => blk_mq_end_request_batch
 => nvme_irq
 => __handle_irq_event_percpu
 => handle_irq_event
 => handle_edge_irq
 => __common_interrupt
 => common_interrupt
 => asm_common_interrupt
 => vmx_do_interrupt_nmi_irqoff
 => vmx_handle_exit_irqoff
 => vcpu_enter_guest
 => vcpu_run
 => kvm_arch_vcpu_ioctl_run
 => kvm_vcpu_ioctl
 => __x64_sys_ioctl
 => do_syscall_64
 => entry_SYSCALL_64_after_hwframe
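
Each such update only touches a per-CPU delta; it is folded into the
global counter either once it crosses a size-dependent threshold or when
the deferred vmstat work flushes it. Roughly (simplified from mm/vmstat.c,
not the verbatim mainline code):

void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
			   long delta)
{
	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
	s8 __percpu *p = pcp->vm_stat_diff + item;
	long x = delta + __this_cpu_read(*p);
	long t = __this_cpu_read(pcp->stat_threshold);

	if (unlikely(abs(x) > t)) {
		/* Fold into the global counter only past the threshold. */
		zone_page_state_add(x, zone, item);
		x = 0;
	}
	__this_cpu_write(*p, x);
}

It is this deferred flushing (vmstat_update scheduled by vmstat_shepherd)
that shows up as the kworker interruption above.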

> > Skip periodic updates for nohz full CPUs. Any callers who
> > need precise values should use a snapshot of the per-CPU
> > counters, or use the global counters with measures to 
> > handle errors up to thresholds (see calculate_normal_threshold).
> 
> I would rephrase this paragraph.
> In-kernel users of vmstat counters either require the precise value, and
> then they use the zone_page_state_snapshot interface, or they can live with
> an imprecision, as the regular flushing can happen at an arbitrary time and
> the cumulative error can grow (see calculate_normal_threshold).

> From that POV the regular flushing can be postponed for CPUs that have
> been isolated from the kernel interference without critical
> infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
> for all isolated CPUs to avoid interference with the isolated workload.
> 
> > Suggested by Michal Hocko.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

OK, updated comment, thanks.
  
Michal Hocko June 5, 2023, 3:55 p.m. UTC | #3
On Mon 05-06-23 11:53:56, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 09:55:57AM +0200, Michal Hocko wrote:
> > On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> > > The interruption caused by vmstat_update is undesirable
> > > for certain applications:
> > > 
> > > oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> > > oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> > > oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> > > kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> > > 
> > > The example above shows an additional 7us for the
> > > 
> > >        	oslat -> kworker -> oslat
> > > 
> > > switches. In the case of a virtualized CPU, and the vmstat_update  
> > > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > > observed in the guest is higher than 50us, violating the acceptable
> > > latency threshold.
> > 
> > I personally find the above problem description insufficient. I have
> > asked several times and only got piece by piece information each time.
> > Maybe there is a reason to be secretive but it would be great to get at
> > least some basic expectations described  and what they are based on.
> 
> There is no reason to be secretive. 
> 
> > 
> > E.g. workloads are running on isolated cpus with nohz full mode to
> > shield off any kernel interruption. Yet there are operations (like mlock,
> > but not mlock alone) that update per cpu counters which will eventually
> > get flushed, and that flushing will cause some interference.
> > Now to the host/guest transition and interference: how does that happen
> > when the guest is running on an isolated and dedicated cpu?
> 
> The updated changelog follows. Does it contain the information requested?
> 
> ----
> 
> Performance details for the kworker interruption:
> 
> Consider workloads running on isolated cpus with nohz full mode to
> shield off any kernel interruption, for example a VM running a
> time-sensitive application with a 50us maximum acceptable interruption
> (use case: soft PLC).
> 
> oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> 
> The example above shows an additional 7us for the
> 
>         oslat -> kworker -> oslat
> 
> switches. In the case of a virtualized CPU, and the vmstat_update
> interruption in the host (of a qemu-kvm vcpu), the latency penalty
> observed in the guest is higher than 50us, violating the acceptable
> latency threshold.
> 
> The isolated vCPU can perform operations that modify per-CPU page counters,
> for example to complete I/O operations:
> 
>       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
>       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
>  => 0xffffffffc042b083
>  => mod_zone_page_state
>  => __folio_end_writeback
>  => folio_end_writeback
>  => iomap_finish_ioend
>  => blk_mq_end_request_batch
>  => nvme_irq
>  => __handle_irq_event_percpu
>  => handle_irq_event
>  => handle_edge_irq
>  => __common_interrupt
>  => common_interrupt
>  => asm_common_interrupt
>  => vmx_do_interrupt_nmi_irqoff
>  => vmx_handle_exit_irqoff
>  => vcpu_enter_guest
>  => vcpu_run
>  => kvm_arch_vcpu_ioctl_run
>  => kvm_vcpu_ioctl
>  => __x64_sys_ioctl
>  => do_syscall_64
>  => entry_SYSCALL_64_after_hwframe

OK, this is really useful. It is just not really clear whether the IO
triggered here is from the guest or is a host activity.

Overall, this is much better!
  
Marcelo Tosatti June 5, 2023, 5:35 p.m. UTC | #4
On Mon, Jun 05, 2023 at 05:55:49PM +0200, Michal Hocko wrote:
> > The example above shows an additional 7us for the
> > 
> >         oslat -> kworker -> oslat
> > 
> > switches. In the case of a virtualized CPU, and the vmstat_update
> > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > observed in the guest is higher than 50us, violating the acceptable
> > latency threshold.
> > 
> > The isolated vCPU can perform operations that modify per-CPU page counters,
> > for example to complete I/O operations:
> > 
> >       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
> >       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
> >  => 0xffffffffc042b083
> >  => mod_zone_page_state
> >  => __folio_end_writeback
> >  => folio_end_writeback
> >  => iomap_finish_ioend
> >  => blk_mq_end_request_batch
> >  => nvme_irq
> >  => __handle_irq_event_percpu
> >  => handle_irq_event
> >  => handle_edge_irq
> >  => __common_interrupt
> >  => common_interrupt
> >  => asm_common_interrupt
> >  => vmx_do_interrupt_nmi_irqoff
> >  => vmx_handle_exit_irqoff
> >  => vcpu_enter_guest
> >  => vcpu_run
> >  => kvm_arch_vcpu_ioctl_run
> >  => kvm_vcpu_ioctl
> >  => __x64_sys_ioctl
> >  => do_syscall_64
> >  => entry_SYSCALL_64_after_hwframe
> 
> OK, this is really useful. It is just not really clear whether the IO
> triggered here is from the guest or is a host activity.

Guest initiated I/O, since the host CPU is isolated.
  
Michal Hocko June 5, 2023, 6:57 p.m. UTC | #5
On Mon 05-06-23 14:35:56, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 05:55:49PM +0200, Michal Hocko wrote:
> > > The example above shows an additional 7us for the
> > > 
> > >         oslat -> kworker -> oslat
> > > 
> > > switches. In the case of a virtualized CPU, and the vmstat_update
> > > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > > observed in the guest is higher than 50us, violating the acceptable
> > > latency threshold.
> > > 
> > > The isolated vCPU can perform operations that modify per-CPU page counters,
> > > for example to complete I/O operations:
> > > 
> > >       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
> > >       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
> > >  => 0xffffffffc042b083
> > >  => mod_zone_page_state
> > >  => __folio_end_writeback
> > >  => folio_end_writeback
> > >  => iomap_finish_ioend
> > >  => blk_mq_end_request_batch
> > >  => nvme_irq
> > >  => __handle_irq_event_percpu
> > >  => handle_irq_event
> > >  => handle_edge_irq
> > >  => __common_interrupt
> > >  => common_interrupt
> > >  => asm_common_interrupt
> > >  => vmx_do_interrupt_nmi_irqoff
> > >  => vmx_handle_exit_irqoff
> > >  => vcpu_enter_guest
> > >  => vcpu_run
> > >  => kvm_arch_vcpu_ioctl_run
> > >  => kvm_vcpu_ioctl
> > >  => __x64_sys_ioctl
> > >  => do_syscall_64
> > >  => entry_SYSCALL_64_after_hwframe
> > 
> > OK, this is really useful. It is just not really clear whether the IO
> > triggered here is from the guest or is a host activity.
> 
> Guest initiated I/O, since the host CPU is isolated.

Make it explicit in the changelog.

I am just wondering how you can achieve your strict deadlines when IO is
involved but that is another story I guess.
  
Marcelo Tosatti June 5, 2023, 7:14 p.m. UTC | #6
On Mon, Jun 05, 2023 at 08:57:15PM +0200, Michal Hocko wrote:
> > Guest initiated I/O, since the host CPU is isolated.
> 
> Make it explicit in the changelog.

I think a better use of our time would be to focus on

https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html

1) Operate in terms of add CPU, remove CPU on sysfs (to avoid races).
2) Don't allow all CPUs to be marked as "block_interf".
3) Remove percpu rwsem lock.

> I am just wondering how you can achieve your strict deadlines when IO is
> involved but that is another story I guess.

IO can be submitted when the ELF binary and libraries are read
from the virtual disk, for example.
  

Patch

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@ 
 #include <linux/mm_inline.h>
 #include <linux/page_ext.h>
 #include <linux/page_owner.h>
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -2022,6 +2023,16 @@  static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		/*
+		 * Skip periodic updates for isolated CPUs.
+		 * Any callers who need precise values should use
+		 * a snapshot of the per-CPU counters, or use the global
+		 * counters with measures to handle errors up to
+		 * thresholds (see calculate_normal_threshold).
+		 */
+		if (cpu_is_isolated(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);