[v6,0/2] sched/numa: add per-process numa_balancing

Message ID 20230412140701.58337-1-ligang.bdlg@bytedance.com
Series sched/numa: add per-process numa_balancing

Message

Gang Li April 12, 2023, 2:06 p.m. UTC
  # Introduction
Add PR_NUMA_BALANCING to prctl().

NUMA balancing triggers a large number of page faults while it is running,
which can cause a performance loss. Processes that care about worst-case
performance therefore need NUMA balancing disabled. Others, on the contrary,
can accept a temporary performance loss in exchange for higher average
performance, so enabling NUMA balancing is better for them.

NUMA balancing can currently only be controlled globally through
/proc/sys/kernel/numa_balancing. To cover the cases above, add a
per-process switch to enable/disable NUMA balancing.

Set per-process numa balancing:
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE); //disable
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_ENABLE);  //enable
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DEFAULT); //follow global
Get numa_balancing state:
	prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &ret);
	cat /proc/<pid>/status | grep NumaB_mode
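
For example, a latency-sensitive process could opt itself out of NUMA
balancing at startup and read the mode back. This is only a usage sketch:
it assumes the PR_* constants from the patched include/uapi/linux/prctl.h
in this series, and that the "get" call writes the mode through an int
pointer passed as the third argument.

	/* Usage sketch; build against headers from a kernel with this series applied. */
	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		int mode = -1;

		/* Ignore the global sysctl and disable NUMA balancing for this process. */
		if (prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE) != 0)
			perror("prctl(PR_SET_NUMA_BALANCING_DISABLE)");

		/* Read back the per-process mode (also shown as NumaB_mode in /proc/<pid>/status). */
		if (prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &mode) == 0)
			printf("NumaB_mode: %d\n", mode);

		return 0;
	}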

# Unixbench
These figures show the overhead introduced by this patch, not a performance improvement.
+-------------------+----------+
|       NAME        | OVERHEAD |
+-------------------+----------+
| Pipe_Throughput   |  0.98%   |
| Context_Switching | -0.96%   |
| Process_Creation  |  1.18%   |
+-------------------+----------+

# Changes
Changes in v6:
- rebase on top of next-20230411
- run Unixbench on physical machine
- add Acked-by from John Hubbard <jhubbard@nvidia.com>

Changes in v5:
- replace numab_enabled with numa_balancing_mode (Peter Zijlstra)
- make numa_balancing_enabled and numa_balancing_mode inline (Peter Zijlstra)
- use static_branch_inc/dec instead of static_branch_enable/disable (Peter Zijlstra)
- delete CONFIG_NUMA_BALANCING in task_tick_fair (Peter Zijlstra)
- reword commit, use imperative mood (Bagas Sanjaya)
- Unixbench overhead result

Changes in v4:
- code cleanup: add wrapper function `numa_balancing_enabled`

Changes in v3:
- Fix compile error.

Changes in v2:
- PR_NUMA_BALANCING now supports three states: enabled, disabled, and default.
  Enabled and disabled ignore the global setting, while default follows the
  global setting.

Gang Li (2):
  sched/numa: use static_branch_inc/dec for sched_numa_balancing
  sched/numa: add per-process numa_balancing

 Documentation/filesystems/proc.rst   |  2 ++
 fs/proc/task_mmu.c                   | 20 ++++++++++++
 include/linux/mm_types.h             |  3 ++
 include/linux/sched/numa_balancing.h | 45 ++++++++++++++++++++++++++
 include/uapi/linux/prctl.h           |  8 +++++
 kernel/fork.c                        |  4 +++
 kernel/sched/core.c                  | 26 +++++++--------
 kernel/sched/fair.c                  |  9 +++---
 kernel/sys.c                         | 47 ++++++++++++++++++++++++++++
 mm/mprotect.c                        |  6 ++--
 10 files changed, 151 insertions(+), 19 deletions(-)
  

Comments

Gang Li April 27, 2023, 5:17 a.m. UTC | #1
Hi,

Looks like there are no objections or comments. Do you have any ideas?

Can we merge this patch in the next merge window?

Thanks!

On 2023/4/12 22:06, Gang Li wrote:
> # Introduce
> Add PR_NUMA_BALANCING in prctl.
> 
> A large number of page faults will cause performance loss when numa
> balancing is performing. Thus those processes which care about worst-case
> performance need numa balancing disabled. Others, on the contrary, allow a
> temporary performance loss in exchange for higher average performance, so
> enable numa balancing is better for them.
> 
> Numa balancing can only be controlled globally by
> /proc/sys/kernel/numa_balancing. Due to the above case, we want to
> disable/enable numa_balancing per-process instead.
>
  
Bagas Sanjaya April 28, 2023, 7:40 a.m. UTC | #2
On 4/27/23 12:17, Gang Li wrote:
> Hi,
> 
> Looks like there are no objections or comments. Do you have any ideas?
> 
> Can we merge this patch in the next merge window.
> 

We're in the 6.4 merge window, so the maintainers' focus is on sending PR
updates to Linus, and this series didn't get applied before the window
opened. Wait until 6.4-rc1 is out and reroll.

Thanks.
  
Gang Li April 28, 2023, 7:45 a.m. UTC | #3
Thank you.

I'll keep an eye on the progress.

On 2023/4/28 15:40, Bagas Sanjaya wrote:
> On 4/27/23 12:17, Gang Li wrote:
>> Hi,
>>
>> Looks like there are no objections or comments. Do you have any ideas?
>>
>> Can we merge this patch in the next merge window.
>>
> 
> We're at 6.4 merge window, so the maintainer focus is to send PR updates
> to Linus. And this series didn't get applied before this merge window.
> Wait until 6.4-rc1 is out and reroll.
> 
> Thanks.
>
  
Gang Li May 17, 2023, 9:28 a.m. UTC | #4
Hi all,

Since both the 6.4 merge window and LSF/MM/BPF are over, could you take
a look now?

Thanks,
Gang Li

On 2023/4/28 15:45, Gang Li wrote:
> Thank you.
> 
> I'll keep an eye on the progress.
> 
> On 2023/4/28 15:40, Bagas Sanjaya wrote:
>> On 4/27/23 12:17, Gang Li wrote:
>>> Hi,
>>>
>>> Looks like there are no objections or comments. Do you have any ideas?
>>>
>>> Can we merge this patch in the next merge window.
>>>
>>
>> We're at 6.4 merge window, so the maintainer focus is to send PR updates
>> to Linus. And this series didn't get applied before this merge window.
>> Wait until 6.4-rc1 is out and reroll.
>>
>> Thanks.
>>