[0/6] zsmalloc/zram: configurable zspage size

Message ID 20221024161213.3221725-1-senozhatsky@chromium.org

Message

Sergey Senozhatsky Oct. 24, 2022, 4:12 p.m. UTC
  Hello,

	Some use-cases and/or data patterns may benefit from
larger zspages. Currently the limit on the number of physical
pages that are linked into a zspage is hardcoded to 4. A higher
limit changes key characteristics of a number of the size
classes, improving compactness of the pool and reducing the
amount of memory the zsmalloc pool uses.

For instance, the huge size class watermark is currently set
to 3264 bytes. With order 3 zspages we have more normal classes,
and the huge size watermark becomes 3632. With order 4 zspages
the huge size watermark becomes 3840.

Commit #1 has more numbers and some analysis.

Sergey Senozhatsky (6):
  zsmalloc: turn zspage order into runtime variable
  zsmalloc/zram: pass zspage order to zs_create_pool()
  zram: add pool_page_order device attribute
  Documentation: document zram pool_page_order attribute
  zsmalloc: break out of loop when found perfect zspage order
  zsmalloc: make sure we select best zspage size

 Documentation/admin-guide/blockdev/zram.rst | 31 +++++--
 drivers/block/zram/zram_drv.c               | 44 ++++++++-
 drivers/block/zram/zram_drv.h               |  2 +
 include/linux/zsmalloc.h                    | 15 +++-
 mm/zsmalloc.c                               | 98 +++++++++++++--------
 5 files changed, 145 insertions(+), 45 deletions(-)
  

Comments

Bagas Sanjaya Oct. 25, 2022, 3:26 a.m. UTC | #1
On Tue, Oct 25, 2022 at 01:12:07AM +0900, Sergey Senozhatsky wrote:
> 	Hello,
> 
> 	Some use-cases and/or data patterns may benefit from
> larger zspages. Currently the limit on the number of physical
> pages that are linked into a zspage is hardcoded to 4. A higher
> limit changes key characteristics of a number of the size
> classes, improving compactness of the pool and reducing the
> amount of memory the zsmalloc pool uses.
> 
> For instance, the huge size class watermark is currently set
> to 3264 bytes. With order 3 zspages we have more normal classes,
> and the huge size watermark becomes 3632. With order 4 zspages
> the huge size watermark becomes 3840.
> 
> Commit #1 has more numbers and some analysis.
> 
> Sergey Senozhatsky (6):
>   zsmalloc: turn zspage order into runtime variable
>   zsmalloc/zram: pass zspage order to zs_create_pool()
>   zram: add pool_page_order device attribute
>   Documentation: document zram pool_page_order attribute
>   zsmalloc: break out of loop when found perfect zspage order
>   zsmalloc: make sure we select best zspage size
> 
>  Documentation/admin-guide/blockdev/zram.rst | 31 +++++--
>  drivers/block/zram/zram_drv.c               | 44 ++++++++-
>  drivers/block/zram/zram_drv.h               |  2 +
>  include/linux/zsmalloc.h                    | 15 +++-
>  mm/zsmalloc.c                               | 98 +++++++++++++--------
>  5 files changed, 145 insertions(+), 45 deletions(-)
> 

Sorry, I can't cleanly apply this patch series due to conflicts in
patch [1/6]. What tree and commit is the series based on?
  
Sergey Senozhatsky Oct. 25, 2022, 3:42 a.m. UTC | #2
On (22/10/25 10:26), Bagas Sanjaya wrote:
> 
> Sorry, I can't cleanly apply this patch series due to conflicts in
> patch [1/6]. What tree and commit is the series based on?

next-20221024
  
Sergey Senozhatsky Oct. 25, 2022, 4:30 a.m. UTC | #3
On (22/10/25 01:12), Sergey Senozhatsky wrote:
> Sergey Senozhatsky (6):
>   zsmalloc: turn zspage order into runtime variable
>   zsmalloc/zram: pass zspage order to zs_create_pool()
>   zram: add pool_page_order device attribute
>   Documentation: document zram pool_page_order attribute
>   zsmalloc: break out of loop when found perfect zspage order
>   zsmalloc: make sure we select best zspage size

Andrew, I want to replace the last 2 patches in the series: I think
we can drop the `usedpc` calculations and instead optimize only for the
`waste` value. Would you prefer me to resend the entire series instead?
  
Sergey Senozhatsky Oct. 25, 2022, 7:57 a.m. UTC | #4
On (22/10/25 13:30), Sergey Senozhatsky wrote:
> On (22/10/25 01:12), Sergey Senozhatsky wrote:
> > Sergey Senozhatsky (6):
> >   zsmalloc: turn zspage order into runtime variable
> >   zsmalloc/zram: pass zspage order to zs_create_pool()
> >   zram: add pool_page_order device attribute
> >   Documentation: document zram pool_page_order attribute
> >   zsmalloc: break out of loop when found perfect zspage order
> >   zsmalloc: make sure we select best zspage size
> 
> Andrew, I want to replace the last 2 patches in the series: I think
> we can drop the `usedpc` calculations and instead optimize only for the
> `waste` value. Would you prefer me to resend the entire series instead?

Andrew, let's do it another way: let's drop the last patch from the
series, but only the last one. That patch was a last-minute addition
to the series and I have not fully studied its impact yet. From
preliminary research I can say that it improves zsmalloc memory usage
only for order 4 zspages and has no statistically significant impact
on order 2 or order 3 zspages.

Synthetic test, base get_pages_per_zspage() vs 'waste' optimized
get_pages_per_zspage() for order 4 zspages:

x zram-order-4-memused-base
+ zram-order-4-memused-patched
+----------------------------------------------------------------------------+
|+               +        +  +                               x xx           x|
|     |___________A_______M____|                           |____M_A______|   |
+----------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   4 6.3960678e+08 6.3974605e+08 6.3962726e+08 6.3965082e+08     64101.637
+   4 6.3902925e+08 6.3929958e+08 6.3926682e+08 6.3919514e+08     120652.52
Difference at 95.0% confidence
	-455680 +/- 167159
	-0.0712389% +/- 0.0261329%
	(Student's t, pooled s = 96607.6)


If I gain enough confidence in that patch I will submit it
separately, with a proper commit message and clear justification.
  
Bagas Sanjaya Oct. 25, 2022, 8:40 a.m. UTC | #5
On 10/25/22 10:42, Sergey Senozhatsky wrote:
> On (22/10/25 10:26), Bagas Sanjaya wrote:
>>
>> Sorry, I can't cleanly apply this patch series due to conflicts in
>> patch [1/6]. What tree and commit is the series based on?
> 
> next-20221024

Hmm, it still can't be applied (again, patch [1/6] is the culprit).
Please rebase on top of mm-everything. Don't forget to pass
--base to git-format-patch(1) so that I know the base commit
of this series.

Thanks.