Message ID | 20221121171202.22080-2-vbabka@suse.cz |
---|---|
State | New |
From | Vlastimil Babka <vbabka@suse.cz> |
Subject | [PATCH 01/12] mm, slab: ignore hardened usercopy parameters when disabled |
Date | Mon, 21 Nov 2022 18:11:51 +0100 |
Series | Introduce CONFIG_SLUB_TINY and deprecate SLOB |
Commit Message
Vlastimil Babka
Nov. 21, 2022, 5:11 p.m. UTC
With CONFIG_HARDENED_USERCOPY not enabled, there are no
__check_heap_object() checks happening that would use the kmem_cache
useroffset and usersize fields. Yet the fields are still initialized,
preventing merging of otherwise compatible caches. Thus ignore the
values passed to cache creation and leave them zero when
CONFIG_HARDENED_USERCOPY is disabled.
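
For background, the consumer of these two fields is __check_heap_object():
under CONFIG_HARDENED_USERCOPY=y it verifies that a usercopy of n bytes at a
given offset into a slab object falls entirely within the whitelisted
[useroffset, useroffset + usersize) region. The userspace model below mirrors
that bounds test; it is a simplified paraphrase for illustration, not the
verbatim kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Model of the two struct kmem_cache fields the check reads. */
struct cache_model {
	unsigned int useroffset;	/* usercopy region offset */
	unsigned int usersize;		/* usercopy region size */
};

/* Mirrors the region test in SLUB's __check_heap_object(). */
static bool usercopy_allowed(const struct cache_model *s,
			     unsigned int offset, unsigned long n)
{
	return offset >= s->useroffset &&
	       offset - s->useroffset <= s->usersize &&
	       n <= s->useroffset - offset + s->usersize;
}

int main(void)
{
	struct cache_model s = { .useroffset = 16, .usersize = 64 };

	printf("%d\n", usercopy_allowed(&s, 16, 64));	/* 1: exact region  */
	printf("%d\n", usercopy_allowed(&s, 32, 64));	/* 0: overruns it   */
	printf("%d\n", usercopy_allowed(&s, 8, 8));	/* 0: starts early  */
	return 0;
}

With CONFIG_HARDENED_USERCOPY=n this check is never compiled in, so the two
fields have no readers at all.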
In a quick virtme boot test, this has reduced the number of caches in
/proc/slabinfo from 131 to 111.
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slab_common.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
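
For readers less familiar with the API: callers pass useroffset/usersize via
kmem_cache_create_usercopy() to whitelist the part of each object that may be
copied to or from userspace. A minimal caller-side sketch follows; struct foo,
its layout and the cache name are made up for illustration.

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/stddef.h>

/* Hypothetical object: only 'buf' is ever copied to/from userspace. */
struct foo {
	unsigned long flags;
	char buf[64];
	int refcnt;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	foo_cache = kmem_cache_create_usercopy("foo_cache",
			sizeof(struct foo), 0, SLAB_HWCACHE_ALIGN,
			offsetof(struct foo, buf),	/* useroffset */
			sizeof_field(struct foo, buf),	/* usersize */
			NULL);
	return foo_cache ? 0 : -ENOMEM;
}

Before this patch, the nonzero usersize made slab_unmergeable() treat such a
cache as always unmergeable, even on kernels where the hardened usercopy
checks do not exist.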
Comments
On November 21, 2022 9:11:51 AM PST, Vlastimil Babka <vbabka@suse.cz> wrote:

[...]

> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 0042fb2730d1..a8cb5de255fc 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -317,7 +317,8 @@ kmem_cache_create_usercopy(const char *name,
>  	flags &= CACHE_CREATE_MASK;
>  
>  	/* Fail closed on bad usersize of useroffset values. */
> -	if (WARN_ON(!usersize && useroffset) ||
> +	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
> +	    WARN_ON(!usersize && useroffset) ||
>  	    WARN_ON(size < usersize || size - usersize < useroffset))
>  		usersize = useroffset = 0;
>  
> @@ -640,6 +641,9 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
>  	align = max(align, size);
>  	s->align = calculate_alignment(flags, align, size);
>  
> +	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY))
> +		useroffset = usersize = 0;
> +
>  	s->useroffset = useroffset;
>  	s->usersize = usersize;

"Always non-mergeable" is intentional here, but I do see the argument
for not doing it under hardened-usercopy.

That said, if you keep this part, maybe go the full step and ifdef away
useroffset/usersize's struct member definition and other logic, especially
for SLUB_TINY benefits, so 2 ulongs are dropped from the cache struct?

-Kees
On 11/21/22 22:35, Kees Cook wrote:
> On November 21, 2022 9:11:51 AM PST, Vlastimil Babka <vbabka@suse.cz> wrote:

[...]

> "Always non-mergeable" is intentional here, but I do see the argument
> for not doing it under hardened-usercopy.
>
> That said, if you keep this part, maybe go the full step and ifdef away
> useroffset/usersize's struct member definition and other logic, especially
> for SLUB_TINY benefits, so 2 ulongs are dropped from the cache struct?

Okay, probably won't make much difference in practice, but for consistency...
----8<----
From 3cdb7b6ad16a9d95603b482969fa870f996ac9dc Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Wed, 16 Nov 2022 15:56:32 +0100
Subject: [PATCH] mm, slab: ignore hardened usercopy parameters when disabled

With CONFIG_HARDENED_USERCOPY not enabled, there are no
__check_heap_object() checks happening that would use the struct
kmem_cache useroffset and usersize fields. Yet the fields are still
initialized, preventing merging of otherwise compatible caches.

Also the fields contribute to struct kmem_cache size unnecessarily when
unused. Thus #ifdef them out completely when CONFIG_HARDENED_USERCOPY is
disabled.

In a quick virtme boot test, this has reduced the number of caches in
/proc/slabinfo from 131 to 111.

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab_def.h | 2 ++
 include/linux/slub_def.h | 2 ++
 mm/slab.h                | 2 --
 mm/slab_common.c         | 9 ++++++++-
 mm/slub.c                | 4 ++++
 5 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index f0ffad6a3365..5834bad8ad78 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -80,8 +80,10 @@ struct kmem_cache {
 	unsigned int *random_seq;
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
 	unsigned int useroffset;	/* Usercopy region offset */
 	unsigned int usersize;		/* Usercopy region size */
+#endif
 
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..7ed5e455cbf4 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -136,8 +136,10 @@ struct kmem_cache {
 	struct kasan_cache kasan_info;
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
 	unsigned int useroffset;	/* Usercopy region offset */
 	unsigned int usersize;		/* Usercopy region size */
+#endif
 
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
diff --git a/mm/slab.h b/mm/slab.h
index 0202a8c2f0d2..db9a7984e22e 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -207,8 +207,6 @@ struct kmem_cache {
 	unsigned int size;	/* The aligned/padded/added on size */
 	unsigned int align;	/* Alignment as calculated */
 	slab_flags_t flags;	/* Active flags on the slab */
-	unsigned int useroffset;/* Usercopy region offset */
-	unsigned int usersize;	/* Usercopy region size */
 	const char *name;	/* Slab name for sysfs */
 	int refcount;		/* Use counter */
 	void (*ctor)(void *);	/* Called on object slot creation */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb2730d1..4339c839a452 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -143,8 +143,10 @@ int slab_unmergeable(struct kmem_cache *s)
 	if (s->ctor)
 		return 1;
 
+#ifdef CONFIG_HARDENED_USERCOPY
 	if (s->usersize)
 		return 1;
+#endif
 
 	/*
 	 * We may have set a slab to be unmergeable during bootstrap.
@@ -223,8 +225,10 @@ static struct kmem_cache *create_cache(const char *name,
 	s->size = s->object_size = object_size;
 	s->align = align;
 	s->ctor = ctor;
+#ifdef CONFIG_HARDENED_USERCOPY
 	s->useroffset = useroffset;
 	s->usersize = usersize;
+#endif
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
@@ -317,7 +321,8 @@ kmem_cache_create_usercopy(const char *name,
 	flags &= CACHE_CREATE_MASK;
 
 	/* Fail closed on bad usersize of useroffset values. */
-	if (WARN_ON(!usersize && useroffset) ||
+	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
+	    WARN_ON(!usersize && useroffset) ||
 	    WARN_ON(size < usersize || size - usersize < useroffset))
 		usersize = useroffset = 0;
 
@@ -640,8 +645,10 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	align = max(align, size);
 	s->align = calculate_alignment(flags, align, size);
 
+#ifdef CONFIG_HARDENED_USERCOPY
 	s->useroffset = useroffset;
 	s->usersize = usersize;
+#endif
 
 	err = __kmem_cache_create(s, flags);
 
diff --git a/mm/slub.c b/mm/slub.c
index 157527d7101b..e32db8540767 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5502,11 +5502,13 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
 SLAB_ATTR_RO(cache_dma);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
 static ssize_t usersize_show(struct kmem_cache *s, char *buf)
 {
 	return sysfs_emit(buf, "%u\n", s->usersize);
 }
 SLAB_ATTR_RO(usersize);
+#endif
 
 static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
 {
@@ -5803,7 +5805,9 @@ static struct attribute *slab_attrs[] = {
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
 #endif
+#ifdef CONFIG_HARDENED_USERCOPY
 	&usersize_attr.attr,
+#endif
 #ifdef CONFIG_KFENCE
 	&skip_kfence_attr.attr,
 #endif
-- 
2.38.1
On Wed, Nov 23, 2022 at 03:23:15PM +0100, Vlastimil Babka wrote:
> Okay, probably won't make much difference in practice, but for consistency...

[...]

> @@ -317,7 +321,8 @@ kmem_cache_create_usercopy(const char *name,
>  	flags &= CACHE_CREATE_MASK;
>  
>  	/* Fail closed on bad usersize of useroffset values. */
> -	if (WARN_ON(!usersize && useroffset) ||
> +	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
> +	    WARN_ON(!usersize && useroffset) ||
>  	    WARN_ON(size < usersize || size - usersize < useroffset))
>  		usersize = useroffset = 0;

I think this change is no longer needed as slab_unmergeable()
now does not check usersize when CONFIG_HARDENED_USERCOPY=n?

[...]
On 11/24/22 12:16, Hyeonggon Yoo wrote:
>>  	/* Fail closed on bad usersize of useroffset values. */
>> -	if (WARN_ON(!usersize && useroffset) ||
>> +	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
>> +	    WARN_ON(!usersize && useroffset) ||
>>  	    WARN_ON(size < usersize || size - usersize < useroffset))
>>  		usersize = useroffset = 0;
>
> I think this change is no longer needed as slab_unmergeable()
> now does not check usersize when CONFIG_HARDENED_USERCOPY=n?

True, but the code here still follows by

	if (!usersize)
		s = __kmem_cache_alias(name, size, align, flags, ctor);

So it seemed simplest just to leave it like that.
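
Putting the two points together, the relevant flow in
kmem_cache_create_usercopy() after this patch looks roughly as follows
(abridged from mm/slab_common.c; locking and error handling omitted):

	flags &= CACHE_CREATE_MASK;

	/* Fail closed on bad usersize of useroffset values. */
	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
	    WARN_ON(!usersize && useroffset) ||
	    WARN_ON(size < usersize || size - usersize < useroffset))
		usersize = useroffset = 0;

	/*
	 * usersize is now always 0 on !CONFIG_HARDENED_USERCOPY kernels,
	 * so the merge/alias path below is taken for every such cache.
	 */
	if (!usersize)
		s = __kmem_cache_alias(name, size, align, flags, ctor);
	if (s)
		goto out_unlock;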
On Wed, Nov 23, 2022 at 03:23:15PM +0100, Vlastimil Babka wrote:

[...]

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb2730d1..a8cb5de255fc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -317,7 +317,8 @@ kmem_cache_create_usercopy(const char *name,
 	flags &= CACHE_CREATE_MASK;
 
 	/* Fail closed on bad usersize of useroffset values. */
-	if (WARN_ON(!usersize && useroffset) ||
+	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
+	    WARN_ON(!usersize && useroffset) ||
 	    WARN_ON(size < usersize || size - usersize < useroffset))
 		usersize = useroffset = 0;
 
@@ -640,6 +641,9 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	align = max(align, size);
 	s->align = calculate_alignment(flags, align, size);
 
+	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY))
+		useroffset = usersize = 0;
+
 	s->useroffset = useroffset;
 	s->usersize = usersize;