Purpose of maple_node objects being size-aligned


Commit Message

Charan Teja Kalla Jan. 23, 2024, 11:03 a.m. UTC
  I am just curious about the purpose of the maple node slab objects being
size-aligned; I can understand why they need to be cache-aligned.

void __init maple_tree_init(void)
{
	maple_node_cache = kmem_cache_create("maple_node",
			sizeof(struct maple_node),
			sizeof(struct maple_node),	/* alignment of the slab object */
			SLAB_PANIC, NULL);
}
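
For reference, the third argument above is the requested object alignment.
The kmem_cache_create() prototype (approximate; it differs slightly across
kernel versions) is:

struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
				     unsigned int align, slab_flags_t flags,
				     void (*ctor)(void *));

Passing sizeof(struct maple_node) as align forces every node onto a 256-byte
boundary, not just a cache-line boundary.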

The reason for asking is that, with slub debug enabled with option Z, change
[1] makes the total object size 256 * 3 (= 768) bytes.  This turns out to
be a problem in debug builds, where the unreclaimable slab consumption is
already very high, exerting memory pressure on the system.

maple_node:
  original object size      = 256b
  after slub_debug enabled = 768b


If there is no special requirement other than cache alignment, I am thinking
of the change in the patch below:


[1] d86bd1bece6f ("mm/slub: support left redzone")

Thanks,
charan
  

Comments

Matthew Wilcox Jan. 23, 2024, 1:26 p.m. UTC | #1
On Tue, Jan 23, 2024 at 04:33:51PM +0530, Charan Teja Kalla wrote:
> I am just curious about the purpose of the maple node slab objects being
> size-aligned; I can understand why they need to be cache-aligned.

Because we encode various information in the bottom few bits of the
maple node pointer.

/*
 * The Maple Tree squeezes various bits in at various points which aren't
 * necessarily obvious.  Usually, this is done by observing that pointers are
 * N-byte aligned and thus the bottom log_2(N) bits are available for use.  We
 * don't use the high bits of pointers to store additional information because
 * we don't know what bits are unused on any given architecture.
 *
 * Nodes are 256 bytes in size and are also aligned to 256 bytes, giving us 8
 * low bits for our own purposes.  Nodes are currently of 4 types:
 * 1. Single pointer (Range is 0-0)
 * 2. Non-leaf Allocation Range nodes
 * 3. Non-leaf Range nodes
 * 4. Leaf Range nodes All nodes consist of a number of node slots,
 *    pivots, and a parent pointer.
 */
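
As a rough illustration of that trick (hypothetical helpers, not the actual
maple tree encoding), a 256-byte-aligned node pointer has its low 8 bits
clear, so they can carry a small tag that gets masked off before the pointer
is dereferenced:

/* Illustration only: made-up tag helpers, not lib/maple_tree.c code. */
#define MA_NODE_ALIGN	256UL
#define MA_TAG_MASK	(MA_NODE_ALIGN - 1)	/* low 8 bits are free */

static inline void *ma_tag_node(struct maple_node *node, unsigned long tag)
{
	return (void *)((unsigned long)node | (tag & MA_TAG_MASK));
}

static inline struct maple_node *ma_untag_node(void *entry)
{
	return (struct maple_node *)((unsigned long)entry & ~MA_TAG_MASK);
}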

> The reason for asking is that, with slub debug enabled with option Z, change
> [1] makes the total object size 256 * 3 (= 768) bytes.  This turns out to
> be a problem in debug builds, where the unreclaimable slab consumption is
> already very high, exerting memory pressure on the system.

That seems like a very badly implemented patch.  Rather than make all
objects left & right redzone, we should simply insert a redzone at
the beginning of the slab.  ie

0	redzone
256	node
512	redzone
768	node
1024	redzone
1280	node
[...]
3072	redzone
3328	node
3584	redzone
3840	wasted space

Instead of getting only five nodes per 4kB page, we'd get seven; about
a 30% reduction in memory usage.
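
A back-of-the-envelope check of those numbers (a sketch of the arithmetic
only, not SLUB's actual layout code):

/* Assumes a 4kB page and 256-byte, 256-byte-aligned nodes. */
#define PAGE_SZ			4096
#define NODE_SZ			256

/* left redzone + node + right redzone, each padded to 256 bytes */
#define PER_OBJ_BOTH		(3 * NODE_SZ)			/* 768 */
#define NODES_BOTH		(PAGE_SZ / PER_OBJ_BOTH)	/* 5 per page */

/* interleaved: redzone, node, redzone, node, ..., trailing redzone */
#define NODES_INTERLEAVED	((PAGE_SZ / NODE_SZ - 1) / 2)	/* 7 per page */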

Slab redzoning is not a feature people turn on often, so I'm not
surprised nobody's noticed before now.
  
Charan Teja Kalla Jan. 23, 2024, 2:40 p.m. UTC | #2
Thanks Matthew!!

On 1/23/2024 6:56 PM, Matthew Wilcox wrote:
>> I am just curious about the purpose of the maple node slab objects being
>> size-aligned; I can understand why they need to be cache-aligned.
> Because we encode various information in the bottom few bits of the
> maple node pointer.
> 
> /*
>  * The Maple Tree squeezes various bits in at various points which aren't
>  * necessarily obvious.  Usually, this is done by observing that pointers are
>  * N-byte aligned and thus the bottom log_2(N) bits are available for use.  We
>  * don't use the high bits of pointers to store additional information because
>  * we don't know what bits are unused on any given architecture.
>  *
>  * Nodes are 256 bytes in size and are also aligned to 256 bytes, giving us 8
>  * low bits for our own purposes.  Nodes are currently of 4 types:
>  * 1. Single pointer (Range is 0-0)
>  * 2. Non-leaf Allocation Range nodes
>  * 3. Non-leaf Range nodes
>  * 4. Leaf Range nodes All nodes consist of a number of node slots,
>  *    pivots, and a parent pointer.
>  */
> 

I got it. Looks like I need to revisit the maple tree documentation
before asking such questions.

> That seems like a very badly implemented patch.  Rather than make all
> objects left & right redzone, we should simply insert a redzone at
> the beginning of the slab.  ie
> 
> 0	redzone
> 256	node
> 512	redzone
> 768	node
> 1024	redzone
> 1280	node
> [...]
> 3072	redzone
> 3382	node
> 3584	redzone
> 3840	wasted space
> 
This seems to work only when redzoning alone is enabled?

I think it will again be 768b if any other debug option is enabled,
say U. The layout then becomes:
(size-aligned left red zone + maple node + right red zone (sizeof(void *))
+ alloc/free track).

My understanding of why we have both left and right red zones is:
                /*
                 * Add some empty padding so that __we can catch
                 * overwrites from earlier objects rather than let
                 * tracking information or the free pointer be
                 * corrupted if a user writes before the start
                 * of the object__.
                 */

When all the debug options are enabled, the slab object will roughly look
like below:

Left red zone | object | right red zone | free pointer | alloc/free
track | padding
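
A rough sketch of how that layout gets back to 768 bytes per object
(approximate arithmetic only; SLUB's calculate_sizes() does the real work
and the track size depends on config):

	unsigned int size;

	size  = 256;			/* left red zone, padded to the object alignment */
	size += 256;			/* struct maple_node itself */
	size += sizeof(void *);		/* right red zone */
	size += sizeof(void *);		/* free pointer kept outside the object */
	size += 2 * 32;			/* alloc + free track, approximate */
	size  = ALIGN(size, 256);	/* rounds up to 768 */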

> Instead of getting only five nodes per 4kB page, we'd get seven; about
> a 30% reduction in memory usage.
> 
> Slab redzoning is not a feature people turn on often, so I'm not
> surprised nobody's noticed before now.

+Vlastimil. The patch in discussion is d86bd1bece6f ("mm/slub: support
left redzone").

Thanks,
Charan
  

Patch

--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6283,8 +6283,8 @@  bool mas_nomem(struct ma_state *mas, gfp_t gfp)
 void __init maple_tree_init(void)
 {
        maple_node_cache = kmem_cache_create("maple_node",
-                       sizeof(struct maple_node), sizeof(struct maple_node),
-                       SLAB_PANIC, NULL);
+                       sizeof(struct maple_node), 0,
+                       SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
 }