[RFC,0/5] slub: Delay freezing of CPU partial slabs

Message ID 20231017154439.3036608-1-chengming.zhou@linux.dev

Message

Chengming Zhou Oct. 17, 2023, 3:44 p.m. UTC
  From: Chengming Zhou <zhouchengming@bytedance.com>

1. Problem
==========
Currently we have to freeze a slab when taking it from the node partial
list, and unfreeze it when putting it back on the node partial list,
because we rely on the node list_lock to synchronize changes of the
"frozen" bit.

This implementation has some drawbacks (a simplified sketch of the current
pattern follows this list):

 - Alloc path: two cmpxchg_double operations.
   When the allocator has used up its CPU partial slabs, it has to get
   some partial slabs from the node. It freezes each slab (one
   cmpxchg_double) with the node list_lock held and puts those frozen
   slabs on its CPU partial list. Later, ___slab_alloc() runs another
   cmpxchg_double try-loop when such a slab is picked for use.

 - Alloc path: amplified contention on the node list_lock.
   Since the "frozen" bit changes have to be synchronized under the node
   list_lock, contention on the slab (struct page) can be transferred to
   the node list_lock. On a machine with many CPUs in one node, contention
   on the list_lock is amplified by all CPUs' alloc paths.

   The current code has to work around this by avoiding the cmpxchg_double
   try-loop: it simply breaks out and returns when contention on the page
   is encountered and the first cmpxchg_double fails. But this workaround
   has its own problems.

 - Free path: redundant unfreeze.
   __slab_free() will freeze and cache some slabs on its partial list, and
   flush them to the node partial list when their number exceeds the limit,
   which requires unfreezing those slabs again under the node list_lock.
   Actually we don't need to freeze slabs that sit on a CPU partial list,
   in which case we can save the unfreeze cmpxchg_double operations on the
   flush path.
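
For orientation, below is a much-simplified sketch of the current scheme
that the drawbacks above refer to. It is not the actual mm/slub.c code:
the try_freeze_slab()/try_unfreeze_slab() helpers are hypothetical
stand-ins for the cmpxchg_double update of the slab's freelist/counters
word, and the limit on how many slabs are cached per CPU is omitted.

/* Alloc path: refill the CPU partial list from the node partial list. */
static void refill_cpu_partial_sketch(struct kmem_cache *s,
				      struct kmem_cache_node *n)
{
	struct slab *slab, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry_safe(slab, tmp, &n->partial, slab_list) {
		if (!try_freeze_slab(s, slab))	/* 1st cmpxchg_double */
			continue;
		remove_partial(n, slab);
		put_cpu_partial(s, slab, 0);	/* cached frozen on this CPU */
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
	/*
	 * When ___slab_alloc() later picks one of these slabs, it runs a
	 * second cmpxchg_double try-loop to take its freelist, hence the
	 * "twice cmpxchg_double" above.
	 */
}

/* Flush path: CPU partial slabs go back to the node, so unfreeze again. */
static void flush_cpu_partial_sketch(struct kmem_cache *s,
				     struct kmem_cache_node *n,
				     struct slab *slab)
{
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	try_unfreeze_slab(s, slab);		/* the redundant unfreeze */
	add_partial(n, slab, DEACTIVATE_TO_TAIL);
	spin_unlock_irqrestore(&n->list_lock, flags);
}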

2. Solution
===========
We solve these problems by leaving slabs unfrozen when they are taken off
the node partial list and while they sit on a CPU partial list, so the
"frozen" bit stays 0.

These partial slabs won't be manipulated concurrently by the alloc path;
the only racer is the free path, which may manipulate a slab's list when
the slab becomes empty (!inuse). So we need to introduce another way to
synchronize against that: we use a bit in slab->flags to indicate whether
the slab is on the node partial list, and only in that case may the slab's
list be manipulated.
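
As a rough illustration (a sketch only, not the actual patch code), the
free slow path would then decide whether it may touch the list like this,
assuming the on_partial() helper introduced in patch 1 and with the
handling of previously-full slabs omitted:

static void free_slowpath_sketch(struct kmem_cache *s,
				 struct kmem_cache_node *n,
				 struct slab *slab, bool became_empty)
{
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	if (became_empty && on_partial(n, slab) &&
	    n->nr_partial >= s->min_partial) {
		/* Safe to unlink: the slab really is on the node list. */
		remove_partial(n, slab);
		spin_unlock_irqrestore(&n->list_lock, flags);
		discard_slab(s, slab);
		return;
	}
	/*
	 * Otherwise the slab either stays on the node partial list, or it
	 * sits on some CPU's partial list (unfrozen) and must not be
	 * unlinked from here.
	 */
	spin_unlock_irqrestore(&n->list_lock, flags);
}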

A slab is now frozen lazily, when it is picked for active use by the CPU.
It appears full at the same time (the CPU takes its whole freelist), so in
that case we still rely on the "frozen" bit to prevent its list from being
manipulated. Thus a slab is frozen only when it is activated for use and
unfrozen only when it is deactivated.
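
A sketch of the delayed freeze (not the actual patch code; the
hypothetical update_freelist() stands in for the cmpxchg_double-style
update of the slab's freelist/counters word):

static void *freeze_on_activate_sketch(struct kmem_cache *s,
				       struct slab *slab)
{
	struct slab new;
	unsigned long counters;
	void *freelist;

	do {
		freelist = slab->freelist;
		counters = slab->counters;

		new.counters = counters;
		new.inuse = slab->objects;	/* CPU takes all free objects */
		new.frozen = 1;			/* frozen only now, on activation */
	} while (!update_freelist(s, slab, freelist, counters,
				  NULL, new.counters));

	/* Unfrozen again only in deactivate_slab(). */
	return freelist;
}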

3. Patches
==========
Patch-1 introduces a new bit in slab->flags that indicates whether the
slab is on the node partial list; the bit is protected by the node
list_lock.
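
A sketch of that idea (helper names, the exact flag bit and the signatures
are illustrative only; SF_NODE_PARTIAL is a hypothetical name):

/* Only ever tested or changed with n->list_lock held. */
static inline bool on_partial(struct kmem_cache_node *n, struct slab *slab)
{
	lockdep_assert_held(&n->list_lock);
	return test_bit(SF_NODE_PARTIAL, &slab->flags);
}

static inline void set_on_partial(struct kmem_cache_node *n, struct slab *slab)
{
	lockdep_assert_held(&n->list_lock);
	set_bit(SF_NODE_PARTIAL, &slab->flags);		/* in add_partial() */
}

static inline void clear_on_partial(struct kmem_cache_node *n, struct slab *slab)
{
	lockdep_assert_held(&n->list_lock);
	clear_bit(SF_NODE_PARTIAL, &slab->flags);	/* in remove_partial() */
}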

Patch-2 changes the free path to check whether the slab is on the node
partial list; only in that case may it manipulate the slab's list. Then we
can keep unfrozen partial slabs off the node partial list, since the free
path will no longer manipulate them concurrently.

Patch-3 optimizes the deactivate path: we can directly unfreeze the slab
(since the node list_lock is no longer needed to synchronize the "frozen"
bit), and grab the node list_lock only if the slab needs to be put on the
node partial list.
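
Roughly (a sketch, not the actual patch; unfreeze_slab() is a hypothetical
stand-in for the cmpxchg-based clearing of the frozen bit, and the races
the real code has to handle are glossed over):

static void deactivate_slab_sketch(struct kmem_cache *s,
				   struct kmem_cache_node *n,
				   struct slab *slab, void *freelist)
{
	unsigned long flags;

	/* Merge back the CPU freelist and clear "frozen": no list_lock needed. */
	unfreeze_slab(s, slab, freelist);

	if (!slab->inuse && n->nr_partial >= s->min_partial) {
		/* Empty and the node already has enough partial slabs. */
		discard_slab(s, slab);
	} else if (slab->freelist) {
		/* Only now is the list_lock needed, to link the slab back in. */
		spin_lock_irqsave(&n->list_lock, flags);
		add_partial(n, slab, DEACTIVATE_TO_TAIL);
		spin_unlock_irqrestore(&n->list_lock, flags);
	}
	/* A full slab ends up on no list at all. */
}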

Patch-4 stops freezing slabs when they are taken off the node partial
list or put on the CPU partial list, so these slabs no longer need to be
unfrozen when they are put back on the node partial list from the CPU
partial list.
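
Sketched (again not the actual code; stat accounting and the limit on the
number of cached slabs are omitted), the refill from the node partial list
then no longer touches the "frozen" bit at all:

static void refill_cpu_partial_new_sketch(struct kmem_cache *s,
					  struct kmem_cache_node *n)
{
	struct slab *slab, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry_safe(slab, tmp, &n->partial, slab_list) {
		remove_partial(n, slab);	/* clears the on-partial bit */
		put_cpu_partial(s, slab, 0);	/* slab stays unfrozen */
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
}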

Patch-5 changes the alloc path to freeze a CPU partial slab when it is
picked for use.

4. Testing
==========
For now we have only done some simple testing on a server with 128 CPUs
(2 nodes) to compare performance.

 - perf bench sched messaging -g 5 -t -l 100000
   baseline	RFC
   7.042s	6.966s
   7.022s	7.045s
   7.054s	6.985s

 - stress-ng --rawpkt 128 --rawpkt-ops 100000000
   baseline	RFC
   2.42s	2.15s
   2.45s	2.16s
   2.44s	2.17s

The numbers above show roughly a 10% improvement on the stress-ng rawpkt
testcase, though not much improvement on the perf sched bench testcase.

Thanks for any comment and code review!

Chengming Zhou (5):
  slub: Introduce on_partial()
  slub: Don't manipulate slab list when used by cpu
  slub: Optimize deactivate_slab()
  slub: Don't freeze slabs for cpu partial
  slub: Introduce get_cpu_partial()

 mm/slab.h |   2 +-
 mm/slub.c | 257 +++++++++++++++++++++++++++++++-----------------------
 2 files changed, 150 insertions(+), 109 deletions(-)
  

Comments

Hyeonggon Yoo Oct. 18, 2023, 6:34 a.m. UTC | #1
On Wed, Oct 18, 2023 at 12:45 AM <chengming.zhou@linux.dev> wrote:
> 4. Testing
> ==========
> We just did some simple testing on a server with 128 CPUs (2 nodes) to
> compare performance for now.
>
>  - perf bench sched messaging -g 5 -t -l 100000
>    baseline     RFC
>    7.042s       6.966s
>    7.022s       7.045s
>    7.054s       6.985s
>
>  - stress-ng --rawpkt 128 --rawpkt-ops 100000000
>    baseline     RFC
>    2.42s        2.15s
>    2.45s        2.16s
>    2.44s        2.17s
>
> It shows above there is about 10% improvement on stress-ng rawpkt
> testcase, although no much improvement on perf sched bench testcase.
>
> Thanks for any comment and code review!

Hi Chengming, this is the kerneltesting.org test report for your patch series.

I applied this series on my slab-experimental tree [1] for testing,
and I observed several kernel panics [2] [3] [4] on kernels without
CONFIG_SLUB_CPU_PARTIAL.

To verify that this series caused kernel panics, I tested before and after
applying it on Vlastimil's slab/for-next and yeah, this series was the cause.

The system is deadlocked on memory, and the OOM-killer reports a huge
amount of slab memory. So maybe there is a memory leak, or something makes
slab memory grow unboundedly?

[1] https://git.kerneltesting.org/slab-experimental/
[2] https://lava.kerneltesting.org/scheduler/job/127#bottom
[3] https://lava.kerneltesting.org/scheduler/job/131#bottom
[4] https://lava.kerneltesting.org/scheduler/job/134#bottom

>
> Chengming Zhou (5):
>   slub: Introduce on_partial()
>   slub: Don't manipulate slab list when used by cpu
>   slub: Optimize deactivate_slab()
>   slub: Don't freeze slabs for cpu partial
>   slub: Introduce get_cpu_partial()
>
>  mm/slab.h |   2 +-
>  mm/slub.c | 257 +++++++++++++++++++++++++++++++-----------------------
>  2 files changed, 150 insertions(+), 109 deletions(-)
>
> --
> 2.40.1
>
  
Chengming Zhou Oct. 18, 2023, 7:44 a.m. UTC | #2
On 2023/10/18 14:34, Hyeonggon Yoo wrote:
> On Wed, Oct 18, 2023 at 12:45 AM <chengming.zhou@linux.dev> wrote:
>> 4. Testing
>> ==========
>> We just did some simple testing on a server with 128 CPUs (2 nodes) to
>> compare performance for now.
>>
>>  - perf bench sched messaging -g 5 -t -l 100000
>>    baseline     RFC
>>    7.042s       6.966s
>>    7.022s       7.045s
>>    7.054s       6.985s
>>
>>  - stress-ng --rawpkt 128 --rawpkt-ops 100000000
>>    baseline     RFC
>>    2.42s        2.15s
>>    2.45s        2.16s
>>    2.44s        2.17s
>>
>> It shows above there is about 10% improvement on stress-ng rawpkt
>> testcase, although no much improvement on perf sched bench testcase.
>>
>> Thanks for any comment and code review!
> 
> Hi Chengming, this is the kerneltesting.org test report for your patch series.
> 
> I applied this series on my slab-experimental tree [1] for testing,
> and I observed several kernel panics [2] [3] [4] on kernels without
> CONFIG_SLUB_CPU_PARTIAL.
> 
> To verify that this series caused kernel panics, I tested before and after
> applying it on Vlastimil's slab/for-next and yeah, this series was the cause.
> 
> System is deadlocked on memory and the OOM-killer says there is a
> huge amount of slab memory. So maybe there is a memory leak or it makes
> slab memory grow unboundedly?

Thanks for the testing!

I can reproduce the OOM locally without CONFIG_SLUB_CPU_PARTIAL.

I made a quick fix below (a better fix will still be needed). The root
cause is in patch-4, which wrongly puts some partial slabs onto the CPU
partial list even without CONFIG_SLUB_CPU_PARTIAL, so those partial slabs
are leaked.

diff --git a/mm/slub.c b/mm/slub.c
index d58eaf8447fd..b7ba6c008122 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2339,12 +2339,12 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
                        }
                }

+#ifdef CONFIG_SLUB_CPU_PARTIAL
                remove_partial(n, slab);
                put_cpu_partial(s, slab, 0);
                stat(s, CPU_PARTIAL_NODE);
                partial_slabs++;

-#ifdef CONFIG_SLUB_CPU_PARTIAL
                if (!kmem_cache_has_cpu_partial(s)
                        || partial_slabs > s->cpu_partial_slabs / 2)
                        break;


> 
> [1] https://git.kerneltesting.org/slab-experimental/
> [2] https://lava.kerneltesting.org/scheduler/job/127#bottom
> [3] https://lava.kerneltesting.org/scheduler/job/131#bottom
> [4] https://lava.kerneltesting.org/scheduler/job/134#bottom
> 
>>
>> Chengming Zhou (5):
>>   slub: Introduce on_partial()
>>   slub: Don't manipulate slab list when used by cpu
>>   slub: Optimize deactivate_slab()
>>   slub: Don't freeze slabs for cpu partial
>>   slub: Introduce get_cpu_partial()
>>
>>  mm/slab.h |   2 +-
>>  mm/slub.c | 257 +++++++++++++++++++++++++++++++-----------------------
>>  2 files changed, 150 insertions(+), 109 deletions(-)
>>
>> --
>> 2.40.1
>>