[1/1] mm: vmalloc: Add a scan area of VA only once

Message ID: 20240202190628.47806-1-urezki@gmail.com
State: New
Series: [1/1] mm: vmalloc: Add a scan area of VA only once

Commit Message

Uladzislau Rezki Feb. 2, 2024, 7:06 p.m. UTC
  Invoke the kmemleak_scan_area() function only for newly allocated
objects, to add a scan area within that object. There is no reason
to add the same scan area (a pointer to the beginning of, or inside,
the object) several times. If a VA is obtained from the cache, its
scan area has already been associated.

Fixes: 7db166b4aa0d ("mm: vmalloc: offload free_vmap_area_lock lock")
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
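
For orientation, below is a minimal sketch of the allocation path this patch
touches. It is illustrative only, not the verbatim kernel code:
va_from_node_cache() and alloc_vmap_area_sketch() are hypothetical names
standing in for the per-node pool lookup introduced by the series referenced
in the Fixes tag, while kmem_cache_alloc_node() and kmemleak_scan_area() are
the real APIs used in mm/vmalloc.c.

/*
 * Illustrative sketch of alloc_vmap_area() after this patch; not the
 * verbatim kernel code. va_from_node_cache() is a hypothetical stand-in
 * for the per-node VA pool lookup.
 */
static struct vmap_area *alloc_vmap_area_sketch(unsigned long size,
		gfp_t gfp_mask, int node)
{
	struct vmap_area *va;

	/*
	 * Fast path: a VA recycled from a per-node pool. Its kmemleak
	 * scan area was registered when the object was first allocated,
	 * so nothing more is needed here.
	 */
	va = va_from_node_cache(size, node);
	if (!va) {
		/* Slow path: a brand-new object from the slab cache. */
		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
		if (unlikely(!va))
			return ERR_PTR(-ENOMEM);

		/*
		 * Register the scan area (from ->rb_node to the end of
		 * the object) exactly once, for the new object only.
		 */
		kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
	}

	/* ... go on to find and claim an address range for this VA ... */
	return va;
}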
  

Comments

Lorenzo Stoakes Feb. 4, 2024, 7:44 p.m. UTC | #1
On Fri, Feb 02, 2024 at 08:06:28PM +0100, Uladzislau Rezki (Sony) wrote:
> Invoke a kmemleak_scan_area() function only for newly allocated
> objects to add a scan area within that object. There is no reason
> to add a same scan area(pointer to beginning or inside the object)
> several times. If a VA is obtained from the cache its scan area
> has already been associated.
>
> Fixes: 7db166b4aa0d ("mm: vmalloc: offload free_vmap_area_lock lock")
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  mm/vmalloc.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 449f45b0e474..25a8df497255 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1882,13 +1882,13 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>  		if (unlikely(!va))
>  			return ERR_PTR(-ENOMEM);
> -	}
>
> -	/*
> -	 * Only scan the relevant parts containing pointers to other objects
> -	 * to avoid false negatives.
> -	 */
> -	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
> +		/*
> +		 * Only scan the relevant parts containing pointers to other objects
> +		 * to avoid false negatives.
> +		 */
> +		kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
> +	}
>
>  retry:
>  	if (addr == vend) {
> --
> 2.39.2
>

Looks good to me, feel free to add:

Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
  
Christoph Hellwig Feb. 5, 2024, 6:41 a.m. UTC | #2
> +		 * Only scan the relevant parts containing pointers to other objects

Please avoid the overly long line.

The rest looks fine.

> +		 * to avoid false negatives.
  
Uladzislau Rezki Feb. 5, 2024, 5:20 p.m. UTC | #3
On Sun, Feb 04, 2024 at 10:41:13PM -0800, Christoph Hellwig wrote:
> > +		 * Only scan the relevant parts containing pointers to other objects
> 
> Please avoid the overly long line.
> 
> The rest looks fine.
> 
> > +		 * to avoid false negatives.
>
Thanks!

--
Uladzislau Rezki
  
Uladzislau Rezki Feb. 5, 2024, 5:20 p.m. UTC | #4
On Sun, Feb 04, 2024 at 07:44:55PM +0000, Lorenzo Stoakes wrote:
> On Fri, Feb 02, 2024 at 08:06:28PM +0100, Uladzislau Rezki (Sony) wrote:
> > Invoke a kmemleak_scan_area() function only for newly allocated
> > objects to add a scan area within that object. There is no reason
> > to add a same scan area(pointer to beginning or inside the object)
> > several times. If a VA is obtained from the cache its scan area
> > has already been associated.
> >
> > Fixes: 7db166b4aa0d ("mm: vmalloc: offload free_vmap_area_lock lock")
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> >  mm/vmalloc.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 449f45b0e474..25a8df497255 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1882,13 +1882,13 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> >  		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
> >  		if (unlikely(!va))
> >  			return ERR_PTR(-ENOMEM);
> > -	}
> >
> > -	/*
> > -	 * Only scan the relevant parts containing pointers to other objects
> > -	 * to avoid false negatives.
> > -	 */
> > -	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
> > +		/*
> > +		 * Only scan the relevant parts containing pointers to other objects
> > +		 * to avoid false negatives.
> > +		 */
> > +		kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
> > +	}
> >
> >  retry:
> >  	if (addr == vend) {
> > --
> > 2.39.2
> >
> 
> Looks good to me, feel free to add:
> 
> Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
>
Appreciate the review!

--
Uladzislau Rezki
  

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 449f45b0e474..25a8df497255 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1882,13 +1882,13 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 		if (unlikely(!va))
 			return ERR_PTR(-ENOMEM);
-	}
 
-	/*
-	 * Only scan the relevant parts containing pointers to other objects
-	 * to avoid false negatives.
-	 */
-	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
+		/*
+		 * Only scan the relevant parts containing pointers to other objects
+		 * to avoid false negatives.
+		 */
+		kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
+	}
 
 retry:
 	if (addr == vend) {
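
A closing note on the API involved, for readers who do not work with kmemleak
regularly. The declaration below is paraphrased from include/linux/kmemleak.h
(consult the header for the authoritative version): the pointer must fall
inside an object kmemleak already tracks, and a size of SIZE_MAX, as used
here, extends the scan area to the end of that object. The scan area stays
associated with the tracked object, so, as the commit message notes, a VA
coming back from the cache does not need it registered again.

/* Paraphrased from include/linux/kmemleak.h: */
void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp);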