Message ID | 20221121135024.1655240-1-feng.tang@intel.com |
---|---|
State | New |
Headers |
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>, Vlastimil Babka <vbabka@suse.cz>, Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>, David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, Roman Gushchin <roman.gushchin@linux.dev>, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Andrey Konovalov <andreyknvl@gmail.com>, Dmitry Vyukov <dvyukov@google.com>, Andrey Ryabinin <ryabinin.a.a@gmail.com>, Alexander Potapenko <glider@google.com>, Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: linux-mm@kvack.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, Feng Tang <feng.tang@intel.com>
Subject: [PATCH -next 1/2] mm/slab: add is_kmalloc_cache() helper macro
Date: Mon, 21 Nov 2022 21:50:23 +0800
Message-Id: <20221121135024.1655240-1-feng.tang@intel.com> |
Series | [-next,1/2] mm/slab: add is_kmalloc_cache() helper macro |
Commit Message
Feng Tang
Nov. 21, 2022, 1:50 p.m. UTC
commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
kmalloc") introduces the 'SLAB_KMALLOC' bit, which specifies whether a
kmem_cache is a kmalloc cache for slab/slub (slob doesn't have
dedicated kmalloc caches).

Add a helper macro so that other components like kasan can use it and
simplify their code.
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
include/linux/slab.h | 6 ++++++
1 file changed, 6 insertions(+)
Comments
On Mon, 21 Nov 2022 21:50:23 +0800 Feng Tang <feng.tang@intel.com> wrote:

> +#ifndef CONFIG_SLOB
> +#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
> +#else
> +#define is_kmalloc_cache(s)	(false)
> +#endif

Could be implemented as a static inline C function, yes?  If so, that's
always best.

For (silly) example, consider the behaviour of

	x = is_kmalloc_cache(s++);

with and without CONFIG_SLOB.
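[Editor's note: a minimal illustration of the gotcha Andrew describes, not taken from the thread; 'some_cache' is a hypothetical pointer. With the macro form, the argument is only evaluated when CONFIG_SLOB is disabled, so a side effect in the argument silently disappears on SLOB builds, whereas a static inline would evaluate it in both configurations.]

	struct kmem_cache *s = some_cache;	/* hypothetical starting pointer */
	bool x = is_kmalloc_cache(s++);
	/*
	 * !CONFIG_SLOB: expands to ((s++)->flags & SLAB_KMALLOC), so s is advanced.
	 *  CONFIG_SLOB: expands to (false); s++ is never evaluated and s is unchanged.
	 */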
On Mon, Nov 21, 2022 at 12:19:38PM -0800, Andrew Morton wrote:
> On Mon, 21 Nov 2022 21:50:23 +0800 Feng Tang <feng.tang@intel.com> wrote:
>
> > +#ifndef CONFIG_SLOB
> > +#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
> > +#else
> > +#define is_kmalloc_cache(s)	(false)
> > +#endif
>
> Could be implemented as a static inline C function, yes?

Right, I also did try inline function first, and met compilation error:

"
./include/linux/slab.h: In function ‘is_kmalloc_cache’:
./include/linux/slab.h:159:18: error: invalid use of undefined type ‘struct kmem_cache’
  159 |     return (s->flags & SLAB_KMALLOC);
      |              ^~
"

The reason is 'struct kmem_cache' definition for slab/slub/slob sit
separately in slab_def.h, slub_def.h and mm/slab.h, and they are not
included in this 'include/linux/slab.h'. So I chose the macro way.

Btw, I've worked on some patches related with sl[auo]b recently, and
really felt the pain when dealing with 3 allocators, on both reading
code and writing patches. And I really like the idea of fading away
SLOB as the first step :)

> If so, that's always best.  For (silly) example, consider the behaviour
> of
>
> 	x = is_kmalloc_cache(s++);
>
> with and without CONFIG_SLOB.

Another solution I can think of is putting the implementation into
slab_common.c, like the below?

Thanks,
Feng

---
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 067f0e80be9e..e4fcdbfb3477 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -149,6 +149,17 @@
 
 struct list_lru;
 struct mem_cgroup;
+
+#ifndef CONFIG_SLOB
+extern bool is_kmalloc_cache(struct kmem_cache *s);
+#else
+static inline bool is_kmalloc_cache(struct kmem_cache *s)
+{
+	return false;
+}
+#endif
+
 /*
  * struct kmem_cache related prototypes
  */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a5480d67f391..860e804b7c0a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -77,6 +77,13 @@ __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
 __setup("slab_nomerge", setup_slab_nomerge);
 __setup("slab_merge", setup_slab_merge);
 
+#ifndef CONFIG_SLOB
+bool is_kmalloc_cache(struct kmem_cache *s)
+{
+	return (s->flags & SLAB_KMALLOC);
+}
+#endif
+
 /*
  * Determine the size of a slab object
  */
On Tue, 22 Nov 2022 13:30:19 +0800 Feng Tang <feng.tang@intel.com> wrote:

> > If so, that's always best.  For (silly) example, consider the behaviour
> > of
> >
> > 	x = is_kmalloc_cache(s++);
> >
> > with and without CONFIG_SLOB.
>
> Another solution I can think of is putting the implementation into
> slab_common.c, like the below?

I'm not sure that's much of an improvement on the macro :(

How about we go with the macro and avoid the expression-with-side-effects
gotcha (and the potential CONFIG_SLOB=n unused-variable gotcha)?  That
would involve evaluating the arg within the CONFIG_SLOB=y version of the
macro.
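[Editor's note: one way Andrew's suggestion could look, sketched here as an assumption rather than anything posted in the thread: keep the macro, but make the CONFIG_SLOB stub evaluate and discard its argument so side effects and variable usage behave the same in both configurations.]

	#ifndef CONFIG_SLOB
	#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
	#else
	#define is_kmalloc_cache(s)	((void)(s), false)	/* evaluate s, then yield false */
	#endif

With this stub, x = is_kmalloc_cache(s++) advances s under both configs, and a variable that is only ever passed to the helper no longer looks unused in the stub configuration.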
On 11/22/22 06:30, Feng Tang wrote:
> On Mon, Nov 21, 2022 at 12:19:38PM -0800, Andrew Morton wrote:
>> On Mon, 21 Nov 2022 21:50:23 +0800 Feng Tang <feng.tang@intel.com> wrote:
>>
>> > +#ifndef CONFIG_SLOB
>> > +#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
>> > +#else
>> > +#define is_kmalloc_cache(s)	(false)
>> > +#endif
>>
>> Could be implemented as a static inline C function, yes?
>
> Right, I also did try inline function first, and met compilation error:
>
> "
> ./include/linux/slab.h: In function ‘is_kmalloc_cache’:
> ./include/linux/slab.h:159:18: error: invalid use of undefined type ‘struct kmem_cache’
>   159 |     return (s->flags & SLAB_KMALLOC);
>       |              ^~
> "
>
> The reason is 'struct kmem_cache' definition for slab/slub/slob sit
> separately in slab_def.h, slub_def.h and mm/slab.h, and they are not
> included in this 'include/linux/slab.h'. So I chose the macro way.

You could try mm/slab.h instead, below the slub_def.h includes there.
is_kmalloc_cache(s) shouldn't have random consumers in the kernel anyway.
It's fine if kasan includes it, as it's intertwined with slab a lot anyway.

> Btw, I've worked on some patches related with sl[auo]b recently, and
> really felt the pain when dealing with 3 allocators, on both reading
> code and writing patches. And I really like the idea of fading away
> SLOB as the first step :)

Can't agree more :)

>> If so, that's always best.  For (silly) example, consider the behaviour
>> of
>>
>> 	x = is_kmalloc_cache(s++);
>>
>> with and without CONFIG_SLOB.
>
> Another solution I can think of is putting the implementation into
> slab_common.c, like the below?

The overhead of function call between compilation units (sans LTO) is not
worth it.

> Thanks,
> Feng
>
> ---
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 067f0e80be9e..e4fcdbfb3477 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -149,6 +149,17 @@
>  
>  struct list_lru;
>  struct mem_cgroup;
> +
> +#ifndef CONFIG_SLOB
> +extern bool is_kmalloc_cache(struct kmem_cache *s);
> +#else
> +static inline bool is_kmalloc_cache(struct kmem_cache *s)
> +{
> +	return false;
> +}
> +#endif
> +
>  /*
>   * struct kmem_cache related prototypes
>   */
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a5480d67f391..860e804b7c0a 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -77,6 +77,13 @@ __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
>  __setup("slab_nomerge", setup_slab_nomerge);
>  __setup("slab_merge", setup_slab_merge);
>  
> +#ifndef CONFIG_SLOB
> +bool is_kmalloc_cache(struct kmem_cache *s)
> +{
> +	return (s->flags & SLAB_KMALLOC);
> +}
> +#endif
> +
>  /*
>   * Determine the size of a slab object
>   */
On Wed, Nov 23, 2022 at 10:21:03AM +0100, Vlastimil Babka wrote:
> On 11/22/22 06:30, Feng Tang wrote:
> > On Mon, Nov 21, 2022 at 12:19:38PM -0800, Andrew Morton wrote:
> >> On Mon, 21 Nov 2022 21:50:23 +0800 Feng Tang <feng.tang@intel.com> wrote:
> >>
> >> > +#ifndef CONFIG_SLOB
> >> > +#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
> >> > +#else
> >> > +#define is_kmalloc_cache(s)	(false)
> >> > +#endif
> >>
> >> Could be implemented as a static inline C function, yes?
> >
> > Right, I also did try inline function first, and met compilation error:
> >
> > "
> > ./include/linux/slab.h: In function ‘is_kmalloc_cache’:
> > ./include/linux/slab.h:159:18: error: invalid use of undefined type ‘struct kmem_cache’
> >   159 |     return (s->flags & SLAB_KMALLOC);
> >       |              ^~
> > "
> >
> > The reason is 'struct kmem_cache' definition for slab/slub/slob sit
> > separately in slab_def.h, slub_def.h and mm/slab.h, and they are not
> > included in this 'include/linux/slab.h'. So I chose the macro way.
>
> You could try mm/slab.h instead, below the slub_def.h includes there.
> is_kmalloc_cache(s) shouldn't have random consumers in the kernel anyway.
> It's fine if kasan includes it, as it's intertwined with slab a lot anyway.

Good suggestion! thanks! This can address Andrew's concern and also avoid
extra cost. And yes, besides sanity code like kasan/kfence, rare code will
care whether other kmem_cache is a kmalloc cache or not. And kasan code
already includes "../slab.h".

> > Btw, I've worked on some patches related with sl[auo]b recently, and
> > really felt the pain when dealing with 3 allocators, on both reading
> > code and writing patches. And I really like the idea of fading away
> > SLOB as the first step :)
>
> Can't agree more :)
>
> >> If so, that's always best.  For (silly) example, consider the behaviour
> >> of
> >>
> >> 	x = is_kmalloc_cache(s++);
> >>
> >> with and without CONFIG_SLOB.
> >
> > Another solution I can think of is putting the implementation into
> > slab_common.c, like the below?
>
> The overhead of function call between compilation units (sans LTO) is not
> worth it.

Yes. Will send out the v2 patches.

Thanks,
Feng
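[Editor's note: for context, a sketch of what Vlastimil's suggestion could look like in mm/slab.h; this is an assumption about the follow-up v2, not quoted from it. Once the helper lives below the slab_def.h/slub_def.h includes, struct kmem_cache is a complete type, so a plain static inline works and avoids both the macro's side-effect surprise and a cross-unit function call.]

	/* In mm/slab.h, after the SLAB/SLUB definition headers are included */
	static inline bool is_kmalloc_cache(struct kmem_cache *s)
	{
	#ifndef CONFIG_SLOB
		return (s->flags & SLAB_KMALLOC);
	#else
		return false;
	#endif
	}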
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 1c670c16c737..ee6499088ad3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -758,6 +758,12 @@ extern void kvfree_sensitive(const void *addr, size_t len);
 
 unsigned int kmem_cache_size(struct kmem_cache *s);
 
+#ifndef CONFIG_SLOB
+#define is_kmalloc_cache(s)	((s)->flags & SLAB_KMALLOC)
+#else
+#define is_kmalloc_cache(s)	(false)
+#endif
+
 /**
  * kmalloc_size_roundup - Report allocation bucket size for the given size
  *
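[Editor's note: a hypothetical caller, added only to show the intended usage and not part of this patch or the thread. The point of the helper is that slab-adjacent sanity code such as kasan can test the flag without open-coding kmem_cache internals.]

	/* Hypothetical example: act only on kmalloc caches in some debug path */
	static void example_debug_cache(struct kmem_cache *s)
	{
		if (is_kmalloc_cache(s))
			pr_debug("%s is a kmalloc cache\n", s->name);
	}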