[RFC,v3,5/7] slub: Introduce freeze_slab()

Message ID 20231024093345.3676493-6-chengming.zhou@linux.dev
State New
Series slub: Delay freezing of CPU partial slabs

Commit Message

Chengming Zhou Oct. 24, 2023, 9:33 a.m. UTC
  From: Chengming Zhou <zhouchengming@bytedance.com>

Later in this series we will take unfrozen slabs off the node partial
list, so we need a freeze_slab() function that freezes a partial slab
and returns its freelist.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/slub.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
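
For context, a follow-up patch in this series is expected to use the
helper roughly like this when taking a slab off a partial list in the
allocation slowpath. This is a simplified sketch, not the actual diff;
the exact call site and the get_partial() shape shown here are
assumptions:

	/*
	 * Sketch: with slabs now left unfrozen on partial lists, the
	 * allocation slowpath freezes one before using its freelist.
	 */
	slab = get_partial(s, node, &pc);	/* off the list, still unfrozen */
	if (slab) {
		/* Set frozen and claim the whole freelist in one cmpxchg. */
		freelist = freeze_slab(s, slab);
		/* ... install slab as the cpu slab and return an object ... */
	}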
  

Comments

Vlastimil Babka Oct. 30, 2023, 6:11 p.m. UTC | #1
On 10/24/23 11:33, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
> 
> Later in this series we will take unfrozen slabs off the node partial
> list, so we need a freeze_slab() function that freezes a partial slab
> and returns its freelist.
> 
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

As you noted, we'll need slab_update_freelist().
Otherwise,

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
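
For reference, the two variants differ in their interrupt expectations:
__slab_update_freelist() asserts that IRQs are already disabled, while
slab_update_freelist() saves and restores the IRQ flags itself around
the bit-lock slow path taken when the cache cannot use the lockless
double-word cmpxchg. Presumably the plain variant is needed because
freeze_slab() can be reached with IRQs enabled. A paraphrased shape,
not verbatim kernel code (details vary by version):

	static inline bool __slab_update_freelist(struct kmem_cache *s, ...)
	{
		lockdep_assert_irqs_disabled();		/* caller has IRQs off */

		if (s->flags & __CMPXCHG_DOUBLE)
			return __update_freelist_fast(...);	/* lockless cmpxchg */
		return __update_freelist_slow(...);	/* bit_spin_lock on the slab */
	}

	static inline bool slab_update_freelist(struct kmem_cache *s, ...)
	{
		unsigned long flags;
		bool ret;

		if (s->flags & __CMPXCHG_DOUBLE)
			return __update_freelist_fast(...);

		local_irq_save(flags);			/* slow path is IRQ-safe here */
		ret = __update_freelist_slow(...);
		local_irq_restore(flags);
		return ret;
	}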

> ---
>  mm/slub.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 7d0234bffad3..5b428648021f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3079,6 +3079,33 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
>  	return freelist;
>  }
>  
> +/*
> + * Freeze the partial slab and return the pointer to the freelist.
> + */
> +static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
> +{
> +	struct slab new;
> +	unsigned long counters;
> +	void *freelist;
> +
> +	do {
> +		freelist = slab->freelist;
> +		counters = slab->counters;
> +
> +		new.counters = counters;
> +		VM_BUG_ON(new.frozen);
> +
> +		new.inuse = slab->objects;
> +		new.frozen = 1;
> +
> +	} while (!__slab_update_freelist(s, slab,
> +		freelist, counters,
> +		NULL, new.counters,
> +		"freeze_slab"));
> +
> +	return freelist;
> +}
> +
>  /*
>   * Slow path. The lockless freelist is empty or we need to perform
>   * debugging duties.
  

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 7d0234bffad3..5b428648021f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3079,6 +3079,33 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
 	return freelist;
 }
 
+/*
+ * Freeze the partial slab and return the pointer to the freelist.
+ */
+static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
+{
+	struct slab new;
+	unsigned long counters;
+	void *freelist;
+
+	do {
+		freelist = slab->freelist;
+		counters = slab->counters;
+
+		new.counters = counters;
+		VM_BUG_ON(new.frozen);
+
+		new.inuse = slab->objects;
+		new.frozen = 1;
+
+	} while (!__slab_update_freelist(s, slab,
+		freelist, counters,
+		NULL, new.counters,
+		"freeze_slab"));
+
+	return freelist;
+}
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.
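
The do/while above is SLUB's usual optimistic-update pattern: snapshot
freelist and counters, build the new counters word with frozen set and
every object marked in use, then retry if another CPU changed the slab
in the meantime. A self-contained illustration of the same retry shape
using C11 atomics (hypothetical fake_slab type, not kernel code):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for the packed slab counters word: low bits hold the
	 * in-use count, the top bit plays the role of the frozen flag. */
	struct fake_slab {
		_Atomic unsigned long counters;
	};

	#define FROZEN_BIT	(1UL << (sizeof(unsigned long) * 8 - 1))

	static bool fake_freeze(struct fake_slab *slab, unsigned long objects)
	{
		unsigned long old, new;

		do {
			old = atomic_load(&slab->counters);
			if (old & FROZEN_BIT)	/* mirrors VM_BUG_ON(new.frozen) */
				return false;
			/* Mark all objects in use and set the frozen flag. */
			new = objects | FROZEN_BIT;
		} while (!atomic_compare_exchange_weak(&slab->counters,
						       &old, new));

		return true;
	}

	int main(void)
	{
		struct fake_slab s = { .counters = 3 };	/* 3 objects in use */

		printf("frozen: %d\n", fake_freeze(&s, 32));
		return 0;
	}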