[3/9] maple_tree: Modify the allocation method of mtree_alloc_range/rrange()

Message ID 20230425110511.11680-4-zhangpeng.00@bytedance.com
State New
Series fix, rework and clean up for maple tree

Commit Message

Peng Zhang April 25, 2023, 11:05 a.m. UTC
Let mtree_alloc_range() and mtree_alloc_rrange() use mas_empty_area()
and mas_empty_area_rev() respectively for allocation, to reduce code
redundancy. After doing this, we no longer need to maintain two logically
identical code paths, which improves maintainability.

In fact, mtree_alloc_range/rrange() have some bugs. For example, the
allocation fails when min equals max (mas_empty_area/area_rev() have
already been fixed for this case).
There are other bugs as well, which I spotted by inspection but have not
tested. For example:
In the mtree_alloc_range()->mas_alloc()->mas_awalk() path we set
mas.index = min and mas.last = max - size. However, mas_awalk() expects
mas.index = min and mas.last = max, which may lead to allocation failures.
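
For illustration only (hypothetical numbers, not from the tree), with
min = 0, max = 100 and size = 16 the old path sets up the walk roughly as:

	MA_STATE(mas, mt, 0, 100 - 16);	/* mas.index = 0, mas.last = 84 */
	/*
	 * mas_awalk() expects the full search window [min, max], i.e.
	 * mas.last = 100, so the narrowed mas.last can make the walk miss
	 * an otherwise valid gap.
	 */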

Right now there are no users of these two functions, so the bugs cannot
trigger, but they might in the future.

Also use mas_store_gfp() instead of mas_fill_gap() as I don't see any
difference between them.

After doing this, we no longer need the three functions
mas_fill_gap(), mas_alloc(), and mas_rev_alloc().
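
A minimal, untested usage sketch of the reworked interface (the tree flags
and the example range/size below are chosen purely for illustration):

	struct maple_tree mt;
	unsigned long start;
	int ret;

	mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE);
	/*
	 * Find a free gap of 16 slots within [0, 1000] and store the entry
	 * there; on success, start holds the chosen index.
	 */
	ret = mtree_alloc_range(&mt, &start, xa_mk_value(0), 16, 0, 1000,
				GFP_KERNEL);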

Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
---
 lib/maple_tree.c | 45 ++++++++++++---------------------------------
 1 file changed, 12 insertions(+), 33 deletions(-)
  

Comments

Liam R. Howlett April 25, 2023, 4:08 p.m. UTC | #1
* Peng Zhang <zhangpeng.00@bytedance.com> [230425 07:05]:
> Let mtree_alloc_range() and mtree_alloc_rrange() use mas_empty_area()
> and mas_empty_area_rev() respectively for allocation to reduce code
> redundancy. And after doing this, we don't need to maintain two logically
> identical codes to improve maintainability.
> 
> In fact, mtree_alloc_range/rrange() has some bugs. For example, when
> dealing with min equals to max (mas_empty_area/area_rev() has been fixed),
> the allocation will fail.
> There are still some other bugs in it, I saw it with my naked eyes, but
> I didn't test it, for example:
> When mtree_alloc_range()->mas_alloc()->mas_awalk(), we set mas.index = min,
> mas.last = max - size. However, mas_awalk() requires mas.index = min,
> mas.last = max, which may lead to allocation failures.

Please don't re-state code in your commit messages.

Try to focus on what you did, and not why.

ie: Aligned mtree_alloc_range() to use the same internal function as
mas_empty_area().

> 
> Right now no users are using these two functions so the bug won't trigger,
> but this might trigger in the future.
> 
> Also use mas_store_gfp() instead of mas_fill_gap() as I don't see any
> difference between them.

Yeah, evolution of the code converged on the same design.  Thanks for
seeing this.

> 
> After doing this, we no longer need the three functions
> mas_fill_gap(), mas_alloc(), and mas_rev_alloc().

Let's just drop mtree_alloc_range() and mtree_alloc_rrange() and
whatever else you found here.  They were planned to simplify the mmap
code allocations, but since there would need to be arch involvement
(coloring, etc) and alignment, etc; it is better to leave this job to
the mm code itself.

> 
> Fixes: 54a611b60590 ("Maple Tree: add new data structure")
> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  lib/maple_tree.c | 45 ++++++++++++---------------------------------
>  1 file changed, 12 insertions(+), 33 deletions(-)
> 
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index aa55c914818a0..294d4c8668323 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -6362,32 +6362,20 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
>  {
>  	int ret = 0;
>  
> -	MA_STATE(mas, mt, min, max - size);
> +	MA_STATE(mas, mt, 0, 0);
>  	if (!mt_is_alloc(mt))
>  		return -EINVAL;
>  
>  	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>  		return -EINVAL;
>  
> -	if (min > max)
> -		return -EINVAL;
> -
> -	if (max < size)
> -		return -EINVAL;
> -
> -	if (!size)
> -		return -EINVAL;
> -
>  	mtree_lock(mt);
> -retry:
> -	mas.offset = 0;
> -	mas.index = min;
> -	mas.last = max - size;
> -	ret = mas_alloc(&mas, entry, size, startp);
> -	if (mas_nomem(&mas, gfp))
> -		goto retry;
> -
> +	ret = mas_empty_area(&mas, min, max, size);
> +	if (!ret)
> +		ret = mas_store_gfp(&mas, entry, gfp);
>  	mtree_unlock(mt);
> +	if (!ret)
> +		*startp = mas.index;
>  	return ret;
>  }
>  EXPORT_SYMBOL(mtree_alloc_range);
> @@ -6398,29 +6386,20 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
>  {
>  	int ret = 0;
>  
> -	MA_STATE(mas, mt, min, max - size);
> +	MA_STATE(mas, mt, 0, 0);
>  	if (!mt_is_alloc(mt))
>  		return -EINVAL;
>  
>  	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>  		return -EINVAL;
>  
> -	if (min >= max)
> -		return -EINVAL;
> -
> -	if (max < size - 1)
> -		return -EINVAL;
> -
> -	if (!size)
> -		return -EINVAL;
> -
>  	mtree_lock(mt);
> -retry:
> -	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
> -	if (mas_nomem(&mas, gfp))
> -		goto retry;
> -
> +	ret = mas_empty_area_rev(&mas, min, max, size);
> +	if (!ret)
> +		ret = mas_store_gfp(&mas, entry, gfp);
>  	mtree_unlock(mt);
> +	if (!ret)
> +		*startp = mas.index;
>  	return ret;
>  }
>  EXPORT_SYMBOL(mtree_alloc_rrange);
> -- 
> 2.20.1
>
  
Peng Zhang April 26, 2023, 12:34 p.m. UTC | #2
On 2023/4/26 00:08, Liam R. Howlett wrote:
> * Peng Zhang <zhangpeng.00@bytedance.com> [230425 07:05]:
>> Let mtree_alloc_range() and mtree_alloc_rrange() use mas_empty_area()
>> and mas_empty_area_rev() respectively for allocation to reduce code
>> redundancy. And after doing this, we don't need to maintain two logically
>> identical codes to improve maintainability.
>>
>> In fact, mtree_alloc_range/rrange() has some bugs. For example, when
>> dealing with min equals to max (mas_empty_area/area_rev() has been fixed),
>> the allocation will fail.
>> There are still some other bugs in it, I saw it with my naked eyes, but
>> I didn't test it, for example:
>> When mtree_alloc_range()->mas_alloc()->mas_awalk(), we set mas.index = min,
>> mas.last = max - size. However, mas_awalk() requires mas.index = min,
>> mas.last = max, which may lead to allocation failures.
> 
> Please don't re-state code in your commit messages.
> 
> Try to focus on what you did, and not why.
> 
> ie: Aligned mtree_alloc_range() to use the same internal function as
> mas_empty_area().
> 
>>
>> Right now no users are using these two functions so the bug won't trigger,
>> but this might trigger in the future.
>>
>> Also use mas_store_gfp() instead of mas_fill_gap() as I don't see any
>> difference between them.
> 
> Yeah, evolution of the code converged on the same design.  Thanks for
> seeing this.
> 
>>
>> After doing this, we no longer need the three functions
>> mas_fill_gap(), mas_alloc(), and mas_rev_alloc().
> 
> Let's just drop mtree_alloc_range() and mtree_alloc_rrange() and
> whatever else you found here.  They were planned to simplify the mmap
> code allocations, but since there would need to be arch involvement
> (coloring, etc) and alignment, etc; it is better to leave this job to
> the mm code itself.
OK, I will remove the unused functions here.
But do mtree_alloc_range() and mtree_alloc_rrange() really not need to be
kept? I don't know whether there will be users for them in other
scenarios in the future.

Thank you for all your suggestions on this patch set; I will update the
series accordingly.
> 
>>
>> Fixes: 54a611b60590 ("Maple Tree: add new data structure")
>> Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
>> ---
>>   lib/maple_tree.c | 45 ++++++++++++---------------------------------
>>   1 file changed, 12 insertions(+), 33 deletions(-)
>>
>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>> index aa55c914818a0..294d4c8668323 100644
>> --- a/lib/maple_tree.c
>> +++ b/lib/maple_tree.c
>> @@ -6362,32 +6362,20 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
>>   {
>>   	int ret = 0;
>>   
>> -	MA_STATE(mas, mt, min, max - size);
>> +	MA_STATE(mas, mt, 0, 0);
>>   	if (!mt_is_alloc(mt))
>>   		return -EINVAL;
>>   
>>   	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>>   		return -EINVAL;
>>   
>> -	if (min > max)
>> -		return -EINVAL;
>> -
>> -	if (max < size)
>> -		return -EINVAL;
>> -
>> -	if (!size)
>> -		return -EINVAL;
>> -
>>   	mtree_lock(mt);
>> -retry:
>> -	mas.offset = 0;
>> -	mas.index = min;
>> -	mas.last = max - size;
>> -	ret = mas_alloc(&mas, entry, size, startp);
>> -	if (mas_nomem(&mas, gfp))
>> -		goto retry;
>> -
>> +	ret = mas_empty_area(&mas, min, max, size);
>> +	if (!ret)
>> +		ret = mas_store_gfp(&mas, entry, gfp);
>>   	mtree_unlock(mt);
>> +	if (!ret)
>> +		*startp = mas.index;
>>   	return ret;
>>   }
>>   EXPORT_SYMBOL(mtree_alloc_range);
>> @@ -6398,29 +6386,20 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
>>   {
>>   	int ret = 0;
>>   
>> -	MA_STATE(mas, mt, min, max - size);
>> +	MA_STATE(mas, mt, 0, 0);
>>   	if (!mt_is_alloc(mt))
>>   		return -EINVAL;
>>   
>>   	if (WARN_ON_ONCE(mt_is_reserved(entry)))
>>   		return -EINVAL;
>>   
>> -	if (min >= max)
>> -		return -EINVAL;
>> -
>> -	if (max < size - 1)
>> -		return -EINVAL;
>> -
>> -	if (!size)
>> -		return -EINVAL;
>> -
>>   	mtree_lock(mt);
>> -retry:
>> -	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
>> -	if (mas_nomem(&mas, gfp))
>> -		goto retry;
>> -
>> +	ret = mas_empty_area_rev(&mas, min, max, size);
>> +	if (!ret)
>> +		ret = mas_store_gfp(&mas, entry, gfp);
>>   	mtree_unlock(mt);
>> +	if (!ret)
>> +		*startp = mas.index;
>>   	return ret;
>>   }
>>   EXPORT_SYMBOL(mtree_alloc_rrange);
>> -- 
>> 2.20.1
>>
  
Liam R. Howlett April 27, 2023, 1:10 a.m. UTC | #3
* Peng Zhang <perlyzhang@gmail.com> [230426 08:34]:
> 
> 
> On 2023/4/26 00:08, Liam R. Howlett wrote:
> > * Peng Zhang <zhangpeng.00@bytedance.com> [230425 07:05]:
> > > Let mtree_alloc_range() and mtree_alloc_rrange() use mas_empty_area()
> > > and mas_empty_area_rev() respectively for allocation to reduce code
> > > redundancy. And after doing this, we don't need to maintain two logically
> > > identical codes to improve maintainability.
> > > 
> > > In fact, mtree_alloc_range/rrange() has some bugs. For example, when
> > > dealing with min equals to max (mas_empty_area/area_rev() has been fixed),
> > > the allocation will fail.
> > > There are still some other bugs in it, I saw it with my naked eyes, but
> > > I didn't test it, for example:
> > > When mtree_alloc_range()->mas_alloc()->mas_awalk(), we set mas.index = min,
> > > mas.last = max - size. However, mas_awalk() requires mas.index = min,
> > > mas.last = max, which may lead to allocation failures.
> > 
> > Please don't re-state code in your commit messages.
> > 
> > Try to focus on what you did, and not why.
> > 
> > ie: Aligned mtree_alloc_range() to use the same internal function as
> > mas_empty_area().
> > 
> > > 
> > > Right now no users are using these two functions so the bug won't trigger,
> > > but this might trigger in the future.
> > > 
> > > Also use mas_store_gfp() instead of mas_fill_gap() as I don't see any
> > > difference between them.
> > 
> > Yeah, evolution of the code converged on the same design.  Thanks for
> > seeing this.
> > 
> > > 
> > > After doing this, we no longer need the three functions
> > > mas_fill_gap(), mas_alloc(), and mas_rev_alloc().
> > 
> > Let's just drop mtree_alloc_range() and mtree_alloc_rrange() and
> > whatever else you found here.  They were planned to simplify the mmap
> > code allocations, but since there would need to be arch involvement
> > (coloring, etc) and alignment, etc; it is better to leave this job to
> > the mm code itself.
> Ok, I will remove some useless functions here.
> But mtree_alloc_range() and mtree_alloc_rrange() really don't need to be
> reserved? Because I don't know if there will be users using it in other
> scenarios in the future.

As you showed, a lot of the code is now the same elsewhere, so it
wouldn't take much to make a version of this outside of the tree if
someone needs the functionality.
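
A minimal sketch of such an out-of-tree version, mirroring what the
reworked mtree_alloc_range() in this patch does (same locking and error
handling as the hunk above):

	MA_STATE(mas, mt, 0, 0);

	mtree_lock(mt);
	ret = mas_empty_area(&mas, min, max, size);	/* find a gap */
	if (!ret)
		ret = mas_store_gfp(&mas, entry, gfp);	/* claim it */
	mtree_unlock(mt);
	if (!ret)
		*startp = mas.index;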

> 
> Thank you for all your suggestions on this patch set, I will update them.
> > 
> > > 
> > > Fixes: 54a611b60590 ("Maple Tree: add new data structure")
> > > Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
> > > ---
> > >   lib/maple_tree.c | 45 ++++++++++++---------------------------------
> > >   1 file changed, 12 insertions(+), 33 deletions(-)
> > > 
> > > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > > index aa55c914818a0..294d4c8668323 100644
> > > --- a/lib/maple_tree.c
> > > +++ b/lib/maple_tree.c
> > > @@ -6362,32 +6362,20 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
> > >   {
> > >   	int ret = 0;
> > > -	MA_STATE(mas, mt, min, max - size);
> > > +	MA_STATE(mas, mt, 0, 0);
> > >   	if (!mt_is_alloc(mt))
> > >   		return -EINVAL;
> > >   	if (WARN_ON_ONCE(mt_is_reserved(entry)))
> > >   		return -EINVAL;
> > > -	if (min > max)
> > > -		return -EINVAL;
> > > -
> > > -	if (max < size)
> > > -		return -EINVAL;
> > > -
> > > -	if (!size)
> > > -		return -EINVAL;
> > > -
> > >   	mtree_lock(mt);
> > > -retry:
> > > -	mas.offset = 0;
> > > -	mas.index = min;
> > > -	mas.last = max - size;
> > > -	ret = mas_alloc(&mas, entry, size, startp);
> > > -	if (mas_nomem(&mas, gfp))
> > > -		goto retry;
> > > -
> > > +	ret = mas_empty_area(&mas, min, max, size);
> > > +	if (!ret)
> > > +		ret = mas_store_gfp(&mas, entry, gfp);
> > >   	mtree_unlock(mt);
> > > +	if (!ret)
> > > +		*startp = mas.index;
> > >   	return ret;
> > >   }
> > >   EXPORT_SYMBOL(mtree_alloc_range);
> > > @@ -6398,29 +6386,20 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
> > >   {
> > >   	int ret = 0;
> > > -	MA_STATE(mas, mt, min, max - size);
> > > +	MA_STATE(mas, mt, 0, 0);
> > >   	if (!mt_is_alloc(mt))
> > >   		return -EINVAL;
> > >   	if (WARN_ON_ONCE(mt_is_reserved(entry)))
> > >   		return -EINVAL;
> > > -	if (min >= max)
> > > -		return -EINVAL;
> > > -
> > > -	if (max < size - 1)
> > > -		return -EINVAL;
> > > -
> > > -	if (!size)
> > > -		return -EINVAL;
> > > -
> > >   	mtree_lock(mt);
> > > -retry:
> > > -	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
> > > -	if (mas_nomem(&mas, gfp))
> > > -		goto retry;
> > > -
> > > +	ret = mas_empty_area_rev(&mas, min, max, size);
> > > +	if (!ret)
> > > +		ret = mas_store_gfp(&mas, entry, gfp);
> > >   	mtree_unlock(mt);
> > > +	if (!ret)
> > > +		*startp = mas.index;
> > >   	return ret;
> > >   }
> > >   EXPORT_SYMBOL(mtree_alloc_rrange);
> > > -- 
> > > 2.20.1
> > >
  

Patch

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index aa55c914818a0..294d4c8668323 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6362,32 +6362,20 @@  int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, max - size);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min > max)
-		return -EINVAL;
-
-	if (max < size)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
-retry:
-	mas.offset = 0;
-	mas.index = min;
-	mas.last = max - size;
-	ret = mas_alloc(&mas, entry, size, startp);
-	if (mas_nomem(&mas, gfp))
-		goto retry;
-
+	ret = mas_empty_area(&mas, min, max, size);
+	if (!ret)
+		ret = mas_store_gfp(&mas, entry, gfp);
 	mtree_unlock(mt);
+	if (!ret)
+		*startp = mas.index;
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_range);
@@ -6398,29 +6386,20 @@  int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
 {
 	int ret = 0;
 
-	MA_STATE(mas, mt, min, max - size);
+	MA_STATE(mas, mt, 0, 0);
 	if (!mt_is_alloc(mt))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(mt_is_reserved(entry)))
 		return -EINVAL;
 
-	if (min >= max)
-		return -EINVAL;
-
-	if (max < size - 1)
-		return -EINVAL;
-
-	if (!size)
-		return -EINVAL;
-
 	mtree_lock(mt);
-retry:
-	ret = mas_rev_alloc(&mas, min, max, entry, size, startp);
-	if (mas_nomem(&mas, gfp))
-		goto retry;
-
+	ret = mas_empty_area_rev(&mas, min, max, size);
+	if (!ret)
+		ret = mas_store_gfp(&mas, entry, gfp);
 	mtree_unlock(mt);
+	if (!ret)
+		*startp = mas.index;
 	return ret;
 }
 EXPORT_SYMBOL(mtree_alloc_rrange);