[v6] mm/vmalloc: lock contention optimization under multi-threading

Message ID 20240229082611.4104839-1-rulin.huang@intel.com
State New
Headers
Series [v6] mm/vmalloc: lock contention optimization under multi-threading

Commit Message

rulinhuang Feb. 29, 2024, 8:26 a.m. UTC
  When allocating a new memory area where the mapping address range is
known, it is observed that the vmap_node->busy.lock is acquired twice.

The first acquisition occurs in the alloc_vmap_area() function when
inserting the vm area into the vm mapping red-black tree. The second
acquisition occurs in the setup_vmalloc_vm() function when updating the
properties of the vm, such as its flags and address.

Combine these two operations in alloc_vmap_area(), which improves
scalability when the vmap_node->busy.lock is contended. By doing so,
the lock only needs to be acquired once instead of twice.
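
In other words, the vm_struct can be fully initialized before the
vmap_area becomes visible in the busy tree, so only the insertion itself
needs the lock. The following is a minimal userspace sketch of that
pattern, using a pthread mutex and a singly linked list in place of
vmap_node->busy.lock and the busy red-black tree; the struct, function
names and values are purely illustrative, not the kernel code.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct area {
	struct area *next;
	unsigned long addr;
	unsigned long size;
};

static pthread_mutex_t busy_lock = PTHREAD_MUTEX_INITIALIZER;
static struct area *busy_list;

/* Old pattern: two acquisitions of busy_lock per allocation. */
static struct area *alloc_then_setup(unsigned long addr, unsigned long size)
{
	struct area *a = malloc(sizeof(*a));

	if (!a)
		return NULL;

	pthread_mutex_lock(&busy_lock);		/* 1st acquisition: publish */
	a->next = busy_list;
	busy_list = a;
	pthread_mutex_unlock(&busy_lock);

	pthread_mutex_lock(&busy_lock);		/* 2nd acquisition: set properties */
	a->addr = addr;
	a->size = size;
	pthread_mutex_unlock(&busy_lock);

	return a;
}

/* New pattern: initialize first, then publish under a single acquisition. */
static struct area *setup_then_alloc(unsigned long addr, unsigned long size)
{
	struct area *a = malloc(sizeof(*a));

	if (!a)
		return NULL;

	a->addr = addr;		/* not yet visible to other threads: no lock needed */
	a->size = size;

	pthread_mutex_lock(&busy_lock);		/* only acquisition: publish */
	a->next = busy_list;
	busy_list = a;
	pthread_mutex_unlock(&busy_lock);

	return a;
}

int main(void)
{
	struct area *a = setup_then_alloc(0x1000, 4096);

	if (a)
		printf("published area at %#lx, size %lu\n", a->addr, a->size);
	return 0;
}

In the patch itself the same idea is realized by calling
setup_vmalloc_vm() from within alloc_vmap_area(), before the vmap_area
is inserted into the busy tree.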

With the above change, a 4% performance improvement is observed on an
Intel Sapphire Rapids platform (224 vCPUs) with stress-ng/pthread
(https://github.com/ColinIanKing/stress-ng), which stress-tests thread
creation.
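
The exact stress-ng parameters are not given in the patch; a typical
invocation of the pthread stressor on such a machine might look like the
following (illustrative only):

	stress-ng --pthread 224 --timeout 60s --metrics-brief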

Reviewed-by: Uladzislau Rezki <urezki@gmail.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: "Chen, Tim C" <tim.c.chen@intel.com>
Reviewed-by: "King, Colin" <colin.king@intel.com>
Signed-off-by: rulinhuang <rulin.huang@intel.com>
---
V1 -> V2: Avoided the partial initialization issue of vm and
separated insert_vmap_area() from alloc_vmap_area()
V2 -> V3: Rebased on 6.8-rc5
V3 -> V4: Rebased on mm-unstable branch
V4 -> V5: Canceled the split of alloc_vmap_area()
and kept insert_vmap_area()
V5 -> V6: Added a BUG_ON() check on va_flags
---
 mm/vmalloc.c | 132 +++++++++++++++++++++++++--------------------------
 1 file changed, 64 insertions(+), 68 deletions(-)


base-commit: 7e6ae2db7f319bf9613ec6db8fa3c9bc1de1b346
  

Comments

rulinhuang Feb. 29, 2024, 8:31 a.m. UTC | #1
Apologies for the confusion the original format led to, and thanks so
much for your guidance, which will surely improve the efficiency of our
communication with the kernel community.

We've submitted v6 of the patch, which checks va_flags more rigorously
with BUG_ON() while keeping the additional performance overhead
negligible. In this revision we also moved the macros, because the
definition of VMAP_RAM must appear before alloc_vmap_area().
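
For reference, this is the check as it appears in alloc_vmap_area() in
the diff below; VMAP_RAM marks vm_map_ram areas, which are never bound
to a vm_struct:

	if (vm) {
		BUG_ON(va_flags & VMAP_RAM);
		setup_vmalloc_vm(vm, va, flags, caller);
	}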

We much appreciate the code refinement from you and Uladzislau. At the
same time, we'd also like to acknowledge the internal review comments
and suggestions from Tim and Colin, without which this patch would not
have been ready to send out for your review. Although the current
implementation differs considerably from its first version, I'd still
recommend recognizing their contributions with the "Reviewed-by" tag.
Does that make sense?

Could you kindly review this version and share any further comments?
Thanks again for your continued help!

  
Baoquan He Feb. 29, 2024, 10:12 a.m. UTC | #2
Hi Rulin,

Thanks for the great work and v6, some concerns, please see inline
comments.

On 02/29/24 at 12:26am, rulinhuang wrote:
> When allocating a new memory area where the mapping address range is
> known, it is observed that the vmap_node->busy.lock is acquired twice.
> 
> The first acquisition occurs in the alloc_vmap_area() function when
> inserting the vm area into the vm mapping red-black tree. The second
> acquisition occurs in the setup_vmalloc_vm() function when updating the
> properties of the vm, such as flags and address, etc.
> 
> Combine these two operations together in alloc_vmap_area(), which
> improves scalability when the vmap_node->busy.lock is contended.
> By doing so, the need to acquire the lock twice can also be eliminated
>  to once.
> 
> With the above change, tested on intel sapphire rapids
> platform(224 vcpu), a 4% performance improvement is
> gained on stress-ng/pthread(https://github.com/ColinIanKing/stress-ng),
> which is the stress test of thread creations.
> 
> Reviewed-by: Uladzislau Rezki <urezki@gmail.com>
> Reviewed-by: Baoquan He <bhe@redhat.com>
> Reviewed-by: "Chen, Tim C" <tim.c.chen@intel.com>
> Reviewed-by: "King, Colin" <colin.king@intel.com>


We possibly need to remove these reviewers' tags when a new code change
is taken, so that people can check and add Acked-by or Reviewed-by again
if they agree, or add new comments if they have any concerns.

> Signed-off-by: rulinhuang <rulin.huang@intel.com>
> ---
> V1 -> V2: Avoided the partial initialization issue of vm and
> separated insert_vmap_area() from alloc_vmap_area()
> V2 -> V3: Rebased on 6.8-rc5
> V3 -> V4: Rebased on mm-unstable branch
> V4 -> V5: cancel the split of alloc_vmap_area()
> and keep insert_vmap_area()
> V5 -> V6: add bug_on
> ---
>  mm/vmalloc.c | 132 +++++++++++++++++++++++++--------------------------
>  1 file changed, 64 insertions(+), 68 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 25a8df497255..5ae028b0d58d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1841,15 +1841,66 @@ node_alloc(unsigned long size, unsigned long align,
>  	return va;
>  }
>  
> +/*** Per cpu kva allocator ***/
> +
> +/*
> + * vmap space is limited especially on 32 bit architectures. Ensure there is
> + * room for at least 16 percpu vmap blocks per CPU.
> + */
> +/*
> + * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
> + * to #define VMALLOC_SPACE		(VMALLOC_END-VMALLOC_START). Guess
> + * instead (we just need a rough idea)
> + */
> +#if BITS_PER_LONG == 32
> +#define VMALLOC_SPACE		(128UL*1024*1024)
> +#else
> +#define VMALLOC_SPACE		(128UL*1024*1024*1024)
> +#endif
> +
> +#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
> +#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
> +#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> +#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
> +#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
> +#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
> +#define VMAP_BBMAP_BITS		\
> +		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
> +		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
> +			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
> +
> +#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
> +
> +/*
> + * Purge threshold to prevent overeager purging of fragmented blocks for
> + * regular operations: Purge if vb->free is less than 1/4 of the capacity.
> + */
> +#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
> +
> +#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
> +#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
> +#define VMAP_FLAGS_MASK		0x3

This code movement is made because we need to check VMAP_RAM in
advance. We may need to move all the data structures and basic helpers
related to the per-CPU kva allocator up along with these macros, just as
the newly introduced vmap_node does. If that's agreed, it's better done
in a separate patch. That's my personal opinion; not sure if Uladzislau
has different thoughts.

Other than this, the overall looks good to me.

  
Uladzislau Rezki Feb. 29, 2024, 10:33 a.m. UTC | #3
On Thu, Feb 29, 2024 at 06:12:00PM +0800, Baoquan He wrote:
> Hi Rulin,
> 
> Thanks for the great work and v6, some concerns, please see inline
> comments.
> 
> On 02/29/24 at 12:26am, rulinhuang wrote:
> > When allocating a new memory area where the mapping address range is
> > known, it is observed that the vmap_node->busy.lock is acquired twice.
> > 
> > The first acquisition occurs in the alloc_vmap_area() function when
> > inserting the vm area into the vm mapping red-black tree. The second
> > acquisition occurs in the setup_vmalloc_vm() function when updating the
> > properties of the vm, such as flags and address, etc.
> > 
> > Combine these two operations together in alloc_vmap_area(), which
> > improves scalability when the vmap_node->busy.lock is contended.
> > By doing so, the need to acquire the lock twice can also be eliminated
> >  to once.
> > 
> > With the above change, tested on intel sapphire rapids
> > platform(224 vcpu), a 4% performance improvement is
> > gained on stress-ng/pthread(https://github.com/ColinIanKing/stress-ng),
> > which is the stress test of thread creations.
> > 
> > Reviewed-by: Uladzislau Rezki <urezki@gmail.com>
> > Reviewed-by: Baoquan He <bhe@redhat.com>
> > Reviewed-by: "Chen, Tim C" <tim.c.chen@intel.com>
> > Reviewed-by: "King, Colin" <colin.king@intel.com>
> 
> 
> We possibly need remove these reviewers' tags when new code change is
> taken so that people check and add Acked-by or Reviewed-by again if then
> agree, or add new comments if any concern.
> 
> > Signed-off-by: rulinhuang <rulin.huang@intel.com>
> > ---
> > V1 -> V2: Avoided the partial initialization issue of vm and
> > separated insert_vmap_area() from alloc_vmap_area()
> > V2 -> V3: Rebased on 6.8-rc5
> > V3 -> V4: Rebased on mm-unstable branch
> > V4 -> V5: cancel the split of alloc_vmap_area()
> > and keep insert_vmap_area()
> > V5 -> V6: add bug_on
> > ---
> >  mm/vmalloc.c | 132 +++++++++++++++++++++++++--------------------------
> >  1 file changed, 64 insertions(+), 68 deletions(-)
> > 
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 25a8df497255..5ae028b0d58d 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1841,15 +1841,66 @@ node_alloc(unsigned long size, unsigned long align,
> >  	return va;
> >  }
> >  
> > +/*** Per cpu kva allocator ***/
> > +
> > +/*
> > + * vmap space is limited especially on 32 bit architectures. Ensure there is
> > + * room for at least 16 percpu vmap blocks per CPU.
> > + */
> > +/*
> > + * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
> > + * to #define VMALLOC_SPACE		(VMALLOC_END-VMALLOC_START). Guess
> > + * instead (we just need a rough idea)
> > + */
> > +#if BITS_PER_LONG == 32
> > +#define VMALLOC_SPACE		(128UL*1024*1024)
> > +#else
> > +#define VMALLOC_SPACE		(128UL*1024*1024*1024)
> > +#endif
> > +
> > +#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
> > +#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
> > +#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> > +#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
> > +#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
> > +#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
> > +#define VMAP_BBMAP_BITS		\
> > +		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
> > +		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
> > +			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
> > +
> > +#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
> > +
> > +/*
> > + * Purge threshold to prevent overeager purging of fragmented blocks for
> > + * regular operations: Purge if vb->free is less than 1/4 of the capacity.
> > + */
> > +#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
> > +
> > +#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
> > +#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
> > +#define VMAP_FLAGS_MASK		0x3
> 
> These code moving is made because we need check VMAP_RAM in advance. We
> may need move all those data structures and basic helpers related to per
> cpu kva allocator up too to along with these macros, just as the newly
> introduced vmap_node does. If that's agreed, better be done in a
> separate patch. My personal opinion. Not sure if Uladzislau has
> different thoughts.
> 
> Other than this, the overall looks good to me.
> 
I agree, the split should be done: one patch as a preparatory move with
no functional change, and a final one with the actual change.

--
Uladzislau Rezki
  
Baoquan He Feb. 29, 2024, 12:07 p.m. UTC | #4
On 02/29/24 at 04:31pm, Huang, Rulin wrote:
> Apologizes for the confusions the original format led to and thanks so
> much for your guidance which will surely enhance the efficiency when
> communicating with the kernel community.
> 
> We've submitted the v6 of the patch, which more rigorously checks
> va_flag with BUG_ON, and at the same time ensures the additional
> performance overhead is subtle. In this modification we also moved the
> position of the macros because the definition of VMAP_RAM should be
> placed before alloc_vmap_area().
> 
> Much appreciation from you and Uladzislau on the code refinement. And at
> the same time, we'd also respect the internal review comments and
> suggestions from Tim and Colin, without which this patch cannot be
> qualified to be sent out for your review. Although the current
> implementation has been much different from its first version, I'd still
> recommend properly recognizing their contributions with the "review-by"
> tag. Does it make sense?

I just checked Documentation/process/submitting-patches.rst; wouldn't
the tags below be more appropriate? The work you mentioned is your
internal cooperation and effort, and may not be related to upstream
patch review.

Co-developed-by: "Chen, Tim C" <tim.c.chen@intel.com>
Signed-off-by: "Chen, Tim C" <tim.c.chen@intel.com>
Co-developed-by: "King, Colin" <colin.king@intel.com>
Signed-off-by: "King, Colin" <colin.king@intel.com>
  
King, Colin March 1, 2024, 9:14 a.m. UTC | #5
Just to confirm, looks good to me. Thanks Rulin.

Colin

  

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 25a8df497255..5ae028b0d58d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1841,15 +1841,66 @@  node_alloc(unsigned long size, unsigned long align,
 	return va;
 }
 
+/*** Per cpu kva allocator ***/
+
+/*
+ * vmap space is limited especially on 32 bit architectures. Ensure there is
+ * room for at least 16 percpu vmap blocks per CPU.
+ */
+/*
+ * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
+ * to #define VMALLOC_SPACE		(VMALLOC_END-VMALLOC_START). Guess
+ * instead (we just need a rough idea)
+ */
+#if BITS_PER_LONG == 32
+#define VMALLOC_SPACE		(128UL*1024*1024)
+#else
+#define VMALLOC_SPACE		(128UL*1024*1024*1024)
+#endif
+
+#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
+#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
+#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
+#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
+#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
+#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
+#define VMAP_BBMAP_BITS		\
+		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
+		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
+			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
+
+#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
+
+/*
+ * Purge threshold to prevent overeager purging of fragmented blocks for
+ * regular operations: Purge if vb->free is less than 1/4 of the capacity.
+ */
+#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
+
+#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
+#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
+#define VMAP_FLAGS_MASK		0x3
+
+static inline void setup_vmalloc_vm(struct vm_struct *vm,
+	struct vmap_area *va, unsigned long flags, const void *caller)
+{
+	vm->flags = flags;
+	vm->addr = (void *)va->va_start;
+	vm->size = va->va_end - va->va_start;
+	vm->caller = caller;
+	va->vm = vm;
+}
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
- * vstart and vend.
+ * vstart and vend. If vm is passed in, the two will also be bound.
  */
 static struct vmap_area *alloc_vmap_area(unsigned long size,
 				unsigned long align,
 				unsigned long vstart, unsigned long vend,
 				int node, gfp_t gfp_mask,
-				unsigned long va_flags)
+				unsigned long va_flags, struct vm_struct *vm,
+				unsigned long flags, const void *caller)
 {
 	struct vmap_node *vn;
 	struct vmap_area *va;
@@ -1912,6 +1963,11 @@  static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = (va_flags | vn_id);
 
+	if (vm) {
+		BUG_ON(va_flags & VMAP_RAM);
+		setup_vmalloc_vm(vm, va, flags, caller);
+	}
+
 	vn = addr_to_node(va->va_start);
 
 	spin_lock(&vn->busy.lock);
@@ -2325,46 +2381,6 @@  static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 	return NULL;
 }
 
-/*** Per cpu kva allocator ***/
-
-/*
- * vmap space is limited especially on 32 bit architectures. Ensure there is
- * room for at least 16 percpu vmap blocks per CPU.
- */
-/*
- * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
- * to #define VMALLOC_SPACE		(VMALLOC_END-VMALLOC_START). Guess
- * instead (we just need a rough idea)
- */
-#if BITS_PER_LONG == 32
-#define VMALLOC_SPACE		(128UL*1024*1024)
-#else
-#define VMALLOC_SPACE		(128UL*1024*1024*1024)
-#endif
-
-#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
-#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
-#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
-#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
-#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
-#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
-#define VMAP_BBMAP_BITS		\
-		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
-		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
-			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
-
-#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
-
-/*
- * Purge threshold to prevent overeager purging of fragmented blocks for
- * regular operations: Purge if vb->free is less than 1/4 of the capacity.
- */
-#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
-
-#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
-#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
-#define VMAP_FLAGS_MASK		0x3
-
 struct vmap_block_queue {
 	spinlock_t lock;
 	struct list_head free;
@@ -2486,7 +2502,8 @@  static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
 					VMALLOC_START, VMALLOC_END,
 					node, gfp_mask,
-					VMAP_RAM|VMAP_BLOCK);
+					VMAP_RAM|VMAP_BLOCK, NULL,
+					0, NULL);
 	if (IS_ERR(va)) {
 		kfree(vb);
 		return ERR_CAST(va);
@@ -2843,7 +2860,8 @@  void *vm_map_ram(struct page **pages, unsigned int count, int node)
 		struct vmap_area *va;
 		va = alloc_vmap_area(size, PAGE_SIZE,
 				VMALLOC_START, VMALLOC_END,
-				node, GFP_KERNEL, VMAP_RAM);
+				node, GFP_KERNEL, VMAP_RAM,
+				NULL, 0, NULL);
 		if (IS_ERR(va))
 			return NULL;
 
@@ -2946,26 +2964,6 @@  void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
-static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
-	struct vmap_area *va, unsigned long flags, const void *caller)
-{
-	vm->flags = flags;
-	vm->addr = (void *)va->va_start;
-	vm->size = va->va_end - va->va_start;
-	vm->caller = caller;
-	va->vm = vm;
-}
-
-static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-			      unsigned long flags, const void *caller)
-{
-	struct vmap_node *vn = addr_to_node(va->va_start);
-
-	spin_lock(&vn->busy.lock);
-	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vn->busy.lock);
-}
-
 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 {
 	/*
@@ -3002,14 +3000,12 @@  static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (!(flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
 
-	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
+	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area, flags, caller);
 	if (IS_ERR(va)) {
 		kfree(area);
 		return NULL;
 	}
 
-	setup_vmalloc_vm(area, va, flags, caller);
-
 	/*
 	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
 	 * best-effort approach, as they can be mapped outside of vmalloc code.
@@ -4584,7 +4580,7 @@  struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 		spin_lock(&vn->busy.lock);
 		insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
-		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
+		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
 				 pcpu_get_vm_areas);
 		spin_unlock(&vn->busy.lock);
 	}