[PATCHv4,4/9] zsmalloc: make huge class watermark zs_pool member

Message ID 20221031054108.541190-5-senozhatsky@chromium.org
State New
Series zsmalloc/zram: configurable zspage size

Commit Message

Sergey Senozhatsky Oct. 31, 2022, 5:41 a.m. UTC
  We will permit per-pool configuration of the pages per-zspage value,
which changes the characteristics of the size classes and moves the
huge class size watermark around. Thus the huge class size needs to
be a per-pool variable.

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
  

Comments

Minchan Kim Nov. 10, 2022, 10:25 p.m. UTC | #1
On Mon, Oct 31, 2022 at 02:41:03PM +0900, Sergey Senozhatsky wrote:
> We will permit per-pool configuration of the pages per-zspage value,
> which changes the characteristics of the size classes and moves the
> huge class size watermark around. Thus the huge class size needs to
> be a per-pool variable.

I think part of the code in the previous patch should move here, since
you are creating the feature in this patch.

BTW, I am wondering whether we really need to jump straight to a
per-pool config option instead of a global, general golden ratio
and/or a smarter approach that optimizes transparently depending on
how much memory we are wasting.

> 
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> ---
>  mm/zsmalloc.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 5f79223e7bfe..d329bd673baa 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -178,7 +178,6 @@ static struct dentry *zs_stat_root;
>   * (see: fix_fullness_group())
>   */
>  static const int fullness_threshold_frac = 4;
> -static size_t huge_class_size;
>  
>  struct size_class {
>  	spinlock_t lock;
> @@ -227,6 +226,7 @@ struct zs_pool {
>  
>  	u32 num_size_classes;
>  	u32 min_alloc_size;
> +	size_t huge_class_size;
>  
>  	struct zs_pool_stats stats;
>  
> @@ -1350,7 +1350,7 @@ EXPORT_SYMBOL_GPL(zs_unmap_object);
>   */
>  size_t zs_huge_class_size(struct zs_pool *pool)
>  {
> -	return huge_class_size;
> +	return pool->huge_class_size;
>  }
>  EXPORT_SYMBOL_GPL(zs_huge_class_size);
>  
> @@ -2262,8 +2262,8 @@ struct zs_pool *zs_create_pool(const char *name)
>  		 * endup in the huge class.
>  		 */
>  		if (pages_per_zspage != 1 && objs_per_zspage != 1 &&
> -				!huge_class_size) {
> -			huge_class_size = size;
> +				!pool->huge_class_size) {
> +			pool->huge_class_size = size;
>  			/*
>  			 * The object uses ZS_HANDLE_SIZE bytes to store the
>  			 * handle. We need to subtract it, because zs_malloc()
> @@ -2273,7 +2273,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  			 * class because it grows by ZS_HANDLE_SIZE extra bytes
>  			 * right before class lookup.
>  			 */
> -			huge_class_size -= (ZS_HANDLE_SIZE - 1);
> +			pool->huge_class_size -= (ZS_HANDLE_SIZE - 1);
>  		}
>  
>  		/*
> -- 
> 2.38.1.273.g43a17bfeac-goog
>
  
Sergey Senozhatsky Nov. 11, 2022, 1:07 a.m. UTC | #2
On (22/11/10 14:25), Minchan Kim wrote:
> On Mon, Oct 31, 2022 at 02:41:03PM +0900, Sergey Senozhatsky wrote:
> > We will permit per-pool configuration of the pages per-zspage value,
> > which changes the characteristics of the size classes and moves the
> > huge class size watermark around. Thus the huge class size needs to
> > be a per-pool variable.
> 
> I think part of the code in the previous patch should move here, since
> you are creating the feature in this patch.

What do you mean? This patch - making huge_class_size a pool value -
looks completely independent to me.

> BTW, I am wondering whether we really need to jump straight to a
> per-pool config option instead of a global, general golden ratio
> and/or a smarter approach that optimizes transparently depending on
> how much memory we are wasting.

I like the per-zspool value.

Dynamic zspage sizing is going to be very, very difficult, if it is
possible at all. With different zspage limits we create different size
class clusters and we also move the huge size class watermark. So if
we, say, increase the zspage length value, then we have more size
classes; but in order for us to actually start saving memory we need
to move the objects that waste memory under the previous cluster
configuration into the new classes. It's even more complex with huge
objects. If we, say, move the huge size class watermark from 3264 to
3632, then in order to actually save memory we need to recompress huge
objects and put them into the size classes between 3264 and 3632.

And that's only half of it. We can also lower the zspage length limit,
in which case we have fewer size classes (because more of them merge)
and the huge size class watermark moves from 3632 back to 3264. How do
we handle that?
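
For illustration, the watermark arithmetic above can be reproduced
outside the kernel. The following standalone userspace sketch (not
kernel code) mimics get_pages_per_zspage() and the descending class
walk in zs_create_pool(); it assumes 4K pages, the default 16-byte
class size delta (PAGE_SIZE >> CLASS_BITS) and an 8-byte handle:

#include <stdio.h>

#define PAGE_SIZE        4096
#define CLASS_SIZE_DELTA 16	/* PAGE_SIZE >> CLASS_BITS */
#define ZS_HANDLE_SIZE   8	/* sizeof(unsigned long) */

/*
 * Pick the zspage length (in pages) that wastes the least memory
 * for this class size -- same logic as get_pages_per_zspage().
 */
static int pages_per_zspage(int class_size, int max_pages)
{
	int i, max_usedpc = 0, best = 1;

	for (i = 1; i <= max_pages; i++) {
		int zspage_size = i * PAGE_SIZE;
		int waste = zspage_size % class_size;
		int usedpc = (zspage_size - waste) * 100 / zspage_size;

		if (usedpc > max_usedpc) {
			max_usedpc = usedpc;
			best = i;
		}
	}
	return best;
}

/*
 * Walk the classes from the largest down, like zs_create_pool(),
 * and return the first (largest) class that is not huge.
 */
static int watermark_class(int max_pages)
{
	int size;

	for (size = PAGE_SIZE; size >= 32; size -= CLASS_SIZE_DELTA) {
		int pages = pages_per_zspage(size, max_pages);
		int objs = pages * PAGE_SIZE / size;

		if (pages != 1 && objs != 1)
			return size;
	}
	return 0;
}

int main(void)
{
	int limit;

	for (limit = 4; limit <= 8; limit += 4) {
		int size = watermark_class(limit);

		printf("limit %d: watermark class %d, huge_class_size %d\n",
		       limit, size, size - (ZS_HANDLE_SIZE - 1));
	}
	return 0;
}

Compiled and run, it reports watermark class 3264 for a 4 pages
per-zspage limit and 3632 for a limit of 8, matching the numbers
above (huge_class_size itself is the class size minus
ZS_HANDLE_SIZE - 1).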

I really think that the per-zspool knob is the easiest way. And it
doesn't block us from making any improvements in the future.
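
To make that concrete, here is roughly what the per-pool behaviour
buys us once huge_class_size lives in struct zs_pool. This is a
hypothetical sketch: zs_create_pool_ext() and its limit argument are
made-up placeholders for whatever knob the series settles on; only
zs_huge_class_size() and zs_destroy_pool() are the real API:

#include <linux/printk.h>
#include <linux/zsmalloc.h>

static void zspage_limit_demo(void)
{
	/* Hypothetical constructor taking a pages per-zspage limit */
	struct zs_pool *small = zs_create_pool_ext("zram0", 4);
	struct zs_pool *large = zs_create_pool_ext("zram1", 8);

	/*
	 * A single global huge_class_size could not serve both pools:
	 * with the limits above they disagree on the watermark
	 * (3257 vs 3625 on 4K pages, per the arithmetic earlier).
	 */
	pr_info("huge: %zu vs %zu\n",
		zs_huge_class_size(small), zs_huge_class_size(large));

	zs_destroy_pool(large);
	zs_destroy_pool(small);
}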
  

Patch

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5f79223e7bfe..d329bd673baa 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -178,7 +178,6 @@ static struct dentry *zs_stat_root;
  * (see: fix_fullness_group())
  */
 static const int fullness_threshold_frac = 4;
-static size_t huge_class_size;
 
 struct size_class {
 	spinlock_t lock;
@@ -227,6 +226,7 @@ struct zs_pool {
 
 	u32 num_size_classes;
 	u32 min_alloc_size;
+	size_t huge_class_size;
 
 	struct zs_pool_stats stats;
 
@@ -1350,7 +1350,7 @@ EXPORT_SYMBOL_GPL(zs_unmap_object);
  */
 size_t zs_huge_class_size(struct zs_pool *pool)
 {
-	return huge_class_size;
+	return pool->huge_class_size;
 }
 EXPORT_SYMBOL_GPL(zs_huge_class_size);
 
@@ -2262,8 +2262,8 @@ struct zs_pool *zs_create_pool(const char *name)
 		 * endup in the huge class.
 		 */
 		if (pages_per_zspage != 1 && objs_per_zspage != 1 &&
-				!huge_class_size) {
-			huge_class_size = size;
+				!pool->huge_class_size) {
+			pool->huge_class_size = size;
 			/*
 			 * The object uses ZS_HANDLE_SIZE bytes to store the
 			 * handle. We need to subtract it, because zs_malloc()
@@ -2273,7 +2273,7 @@ struct zs_pool *zs_create_pool(const char *name)
 			 * class because it grows by ZS_HANDLE_SIZE extra bytes
 			 * right before class lookup.
 			 */
-			huge_class_size -= (ZS_HANDLE_SIZE - 1);
+			pool->huge_class_size -= (ZS_HANDLE_SIZE - 1);
 		}
 
 		/*