[v2,-next,1/2] mm/slb: add is_kmalloc_cache() helper function

Message ID: 20221123123159.2325763-1-feng.tang@intel.com
State: New
Series: [v2,-next,1/2] mm/slb: add is_kmalloc_cache() helper function

Commit Message

Feng Tang Nov. 23, 2022, 12:31 p.m. UTC
  Commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
kmalloc") introduced the 'SLAB_KMALLOC' bit, which specifies whether a
kmem_cache is a kmalloc cache for slab/slub (slob doesn't have
dedicated kmalloc caches).

Add an inline helper function so that other components such as kasan
can simplify their code.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
changelog:

  since v1:
  * don't use a macro for the helper (Andrew Morton)
  * place the inline function in mm/slab.h to solve a data structure
    definition issue (Vlastimil Babka)

 mm/slab.h | 8 ++++++++
 1 file changed, 8 insertions(+)
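
As a rough usage illustration (not part of this series), a caller in mm/
that already includes "slab.h" could use the new helper as below; the
function name and the pr_debug() message are made up for this sketch:

static void report_kmalloc_cache(struct kmem_cache *s)
{
	/* True only for SLAB/SLUB kmalloc caches; always false on SLOB. */
	if (is_kmalloc_cache(s))
		pr_debug("%s is a kmalloc cache\n", s->name);
}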
  

Comments

Vlastimil Babka Nov. 23, 2022, 5:03 p.m. UTC | #1
Subject should say mm/slab

On 11/23/22 13:31, Feng Tang wrote:
> commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
> kmalloc") introduces 'SLAB_KMALLOC' bit specifying whether a
> kmem_cache is a kmalloc cache for slab/slub (slob doesn't have
> dedicated kmalloc caches).
> 
> Add a helper inline function for other components like kasan to
> simplify code.
> 
> Signed-off-by: Feng Tang <feng.tang@intel.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Patch 2 seems to depend on patches in Andrew's tree so it's simpler if he
takes both of these too.

Thanks,
Vlastimil

> ---
> changlog:
>   
>   since v1:
>   * don't use macro for the helper (Andrew Morton)
>   * place the inline function in mm/slb.h to solve data structure
>     definition issue (Vlastimil Babka)
> 
>  mm/slab.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index e3b3231af742..0d72fd62751a 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -325,6 +325,14 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  }
>  #endif
>  
> +static inline bool is_kmalloc_cache(struct kmem_cache *s)
> +{
> +#ifndef CONFIG_SLOB
> +	return (s->flags & SLAB_KMALLOC);
> +#else
> +	return false;
> +#endif
> +}
>  
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
  
Feng Tang Nov. 24, 2022, 1:21 a.m. UTC | #2
On Wed, Nov 23, 2022 at 06:03:26PM +0100, Vlastimil Babka wrote:
> Subject should say mm/slab

My bad, thanks for catching this.

> On 11/23/22 13:31, Feng Tang wrote:
> > commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
> > kmalloc") introduces 'SLAB_KMALLOC' bit specifying whether a
> > kmem_cache is a kmalloc cache for slab/slub (slob doesn't have
> > dedicated kmalloc caches).
> > 
> > Add a helper inline function for other components like kasan to
> > simplify code.
> > 
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!

> Patch 2 seems to depend on patches in Andrew's tree so it's simpler if he
> takes both of these too.

Yes, patch 2/2 changes many places in the kasan code.

Hi Andrew,

Could you consider taking these 2 patches into your tree? If you think
it's too close to the merge window, I can respin after 6.2. Thanks!

- Feng

> Thanks,
> Vlastimil
> 
> > ---
> > changlog:
> >   
> >   since v1:
> >   * don't use macro for the helper (Andrew Morton)
> >   * place the inline function in mm/slb.h to solve data structure
> >     definition issue (Vlastimil Babka)
> > 
> >  mm/slab.h | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/mm/slab.h b/mm/slab.h
> > index e3b3231af742..0d72fd62751a 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -325,6 +325,14 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >  }
> >  #endif
> >  
> > +static inline bool is_kmalloc_cache(struct kmem_cache *s)
> > +{
> > +#ifndef CONFIG_SLOB
> > +	return (s->flags & SLAB_KMALLOC);
> > +#else
> > +	return false;
> > +#endif
> > +}
> >  
> >  /* Legal flag mask for kmem_cache_create(), for various configurations */
> >  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>
  
Hyeonggon Yoo Nov. 24, 2022, 10:57 a.m. UTC | #3
On Wed, Nov 23, 2022 at 08:31:58PM +0800, Feng Tang wrote:
> commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
> kmalloc") introduces 'SLAB_KMALLOC' bit specifying whether a
> kmem_cache is a kmalloc cache for slab/slub (slob doesn't have
> dedicated kmalloc caches).
> 
> Add a helper inline function for other components like kasan to
> simplify code.
> 
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> changlog:
>   
>   since v1:
>   * don't use macro for the helper (Andrew Morton)
>   * place the inline function in mm/slb.h to solve data structure
>     definition issue (Vlastimil Babka)
> 
>  mm/slab.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index e3b3231af742..0d72fd62751a 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -325,6 +325,14 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  }
>  #endif
>  
> +static inline bool is_kmalloc_cache(struct kmem_cache *s)
> +{
> +#ifndef CONFIG_SLOB
> +	return (s->flags & SLAB_KMALLOC);
> +#else
> +	return false;
> +#endif
> +}
>  
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
> -- 
> 2.34.1

With Vlastimil's comment addressed:

Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
  

Patch

diff --git a/mm/slab.h b/mm/slab.h
index e3b3231af742..0d72fd62751a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -325,6 +325,14 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 }
 #endif
 
+static inline bool is_kmalloc_cache(struct kmem_cache *s)
+{
+#ifndef CONFIG_SLOB
+	return (s->flags & SLAB_KMALLOC);
+#else
+	return false;
+#endif
+}
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
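
Patch 2/2 (not included here) converts KASAN call sites to this helper. A
sketch of the kind of change involved, using made-up names only (the real
KASAN call sites are not shown in this message):

/* Sketch only; not the actual code touched by patch 2/2. */
static bool cache_has_kmalloc_objects(struct kmem_cache *cache)
{
	/* Before: open-coded flag test, repeated at each call site. */
	/* return cache->flags & SLAB_KMALLOC; */

	/* After: one shared helper from mm/slab.h. */
	return is_kmalloc_cache(cache);
}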